When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `do_gradientai.APIConnectionError` is raised.

When the API returns a non-success status code (that is, a 4xx or 5xx response), a subclass of `do_gradientai.APIStatusError` is raised, containing `status_code` and `response` properties.

All errors inherit from `do_gradientai.APIError`.
```python
import do_gradientai
from do_gradientai import GradientAI

client = GradientAI()

try:
    client.chat.completions.create(
        messages=[
            ...
        ],
        model="llama3.3-70b-instruct",
    )
except do_gradientai.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except do_gradientai.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except do_gradientai.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```
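The `except` clauses above depend on the exception hierarchy described earlier: everything inherits from `APIError`, status errors carry a `status_code`, and `RateLimitError` is the 429-specific case. A minimal self-contained sketch of such a hierarchy (illustrative only, not the library's source):

```python
class APIError(Exception):
    """Base class: all errors inherit from it."""

class APIStatusError(APIError):
    """Raised for non-success (4xx/5xx) responses; carries the status code."""
    def __init__(self, status_code: int) -> None:
        super().__init__(f"HTTP {status_code}")
        self.status_code = status_code

class RateLimitError(APIStatusError):
    """Specific subclass raised for 429 responses."""

def error_for_status(status_code: int) -> APIStatusError:
    # Dispatch to the most specific class for the status code.
    if status_code == 429:
        return RateLimitError(status_code)
    return APIStatusError(status_code)

print(type(error_for_status(429)).__name__)  # → RateLimitError
print(error_for_status(500).status_code)     # → 500
```

Because 429 maps to the more specific subclass, an `except RateLimitError` branch placed before `except APIStatusError` catches rate limits without swallowing other status errors.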
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors are all retried by default.

You can use the `max_retries` option to configure or disable retry settings:

```python
from do_gradientai import GradientAI

# Configure the default for all requests:
client = GradientAI(
    max_retries=0,  # disable retries
)
```
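When handling `RateLimitError` yourself (the "back off a bit" branch above), or when retries are disabled via `max_retries`, you may want your own delay schedule. A generic exponential-backoff-with-jitter sketch — not the SDK's internal retry logic:

```python
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 8.0) -> float:
    """Exponential backoff with full jitter: up to base * 2**attempt seconds, capped."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))

# Successive attempts wait longer on average, but never more than `cap` seconds.
delays = [backoff_delay(n) for n in range(5)]
print(all(0.0 <= d <= 8.0 for d in delays))  # → True
```

The jitter spreads retries from many clients over time instead of letting them hammer the server in lockstep.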
By default requests time out after 1 minute. You can configure this with a `timeout` option, which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:

```python
from do_gradientai import GradientAI

# Configure the default for all requests:
client = GradientAI(
    timeout=20.0,  # seconds
)
```
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,

```python
response = client.chat.completions.with_raw_response.create(
    ...
)
completion = response.parse()  # get the object that `chat.completions.create()` would have returned
print(completion.choices)
```
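Conceptually, a raw-response wrapper pairs the undecoded HTTP payload (and headers) with a `parse()` method that yields the object the plain call would have returned. A generic, self-contained sketch of that pattern — illustrative only, not the SDK's actual `APIResponse` class:

```python
import json

class RawResponse:
    """Holds the raw HTTP body and headers; parse() decodes the payload."""
    def __init__(self, body: str, headers: dict):
        self.body = body
        self.headers = headers

    def parse(self):
        # Decode the body into the object a non-raw call would return.
        return json.loads(self.body)

raw = RawResponse('{"choices": [{"text": "hi"}]}',
                  {"content-type": "application/json"})
completion = raw.parse()
print(completion["choices"])  # → [{'text': 'hi'}]
```

The point of the split is that callers can inspect headers or the unparsed body first, then still recover the parsed object on demand.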
These methods return an [`APIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) object.

The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.

#### `.with_streaming_response`
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.

```py
from do_gradientai import GradientAI

with GradientAI() as client:
    # make requests here
    ...
```
If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.

You can determine the version that is being used at runtime with:
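A generic way to check an installed distribution's version at runtime is via `importlib.metadata` from the standard library. The distribution name `"gradientai"` below is an assumption (this document does not confirm the name on PyPI); substitute whatever name appears in your installed package list:

```python
from importlib import metadata

def runtime_version(dist_name: str):
    """Return the installed version of a distribution, or None if it is absent."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

# "gradientai" is an assumed distribution name, used here for illustration.
print(runtime_version("gradientai") or "not installed")
```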