Commit 31809e8

Merge remote-tracking branch 'origin/next' into next

2 parents: 8eed3e5 + 00c62b3

File tree: 523 files changed, +40235 additions, -1342 deletions


.stats.yml

Lines changed: 4 additions & 4 deletions

```diff
@@ -1,4 +1,4 @@
-configured_endpoints: 77
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/digitalocean%2Fgradientai-391afaae764eb758523b67805cb47ae3bc319dc119d83414afdd66f123ceaf5c.yml
-openapi_spec_hash: c773d792724f5647ae25a5ae4ccec208
-config_hash: 0bd094d86a010f7cbd5eb22ef548a29f
+configured_endpoints: 169
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/digitalocean%2Fgradientai-f8e8c290636c1e218efcf7bfe92ba7570c11690754d21287d838919fbc943a80.yml
+openapi_spec_hash: 1eddf488ecbe415efb45445697716f5d
+config_hash: c59a2f17744fc2b7a8248ec916b8aa70
```
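`.stats.yml` is a flat `key: value` file, so its values can be read without a YAML dependency. A minimal sketch of such a reader (the helper below is illustrative, not part of the SDK; splitting on the first `:` only keeps the `https://` in `openapi_spec_url` intact):

```python
def read_stats(text: str) -> dict[str, str]:
    """Parse flat `key: value` lines like those in .stats.yml."""
    stats: dict[str, str] = {}
    for line in text.splitlines():
        if ":" in line:
            # partition() splits on the first ":" only, so URL values survive.
            key, _, value = line.partition(":")
            stats[key.strip()] = value.strip()
    return stats

# Values taken from the new side of the diff above:
sample = "configured_endpoints: 169\nconfig_hash: c59a2f17744fc2b7a8248ec916b8aa70\n"
print(read_stats(sample)["configured_endpoints"])  # -> 169
```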

CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -36,7 +36,7 @@ $ pip install -r requirements-dev.lock
 
 Most of the SDK is generated code. Modifications to code will be persisted between generations, but may
 result in merge conflicts between manual patches and changes from the generator. The generator will never
-modify the contents of the `src/gradientai/lib/` and `examples/` directories.
+modify the contents of the `src/do_gradientai/lib/` and `examples/` directories.
 
 ## Adding and running examples
 
```
README.md

Lines changed: 24 additions & 24 deletions

````diff
@@ -26,7 +26,7 @@ The full API of this library can be found in [api.md](api.md).
 
 ```python
 import os
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 api_client = GradientAI(
     api_key=os.environ.get("GRADIENTAI_API_KEY"),  # This is the default and can be omitted
@@ -66,7 +66,7 @@ Simply import `AsyncGradientAI` instead of `GradientAI` and use `await` with each
 ```python
 import os
 import asyncio
-from gradientai import AsyncGradientAI
+from do_gradientai import AsyncGradientAI
 
 client = AsyncGradientAI(
     api_key=os.environ.get("GRADIENTAI_API_KEY"),  # This is the default and can be omitted
@@ -106,8 +106,8 @@ Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient`
 
 ```python
 import asyncio
-from gradientai import DefaultAioHttpClient
-from gradientai import AsyncGradientAI
+from do_gradientai import DefaultAioHttpClient
+from do_gradientai import AsyncGradientAI
 
 
 async def main() -> None:
@@ -135,7 +135,7 @@ asyncio.run(main())
 We provide support for streaming responses using Server Side Events (SSE).
 
 ```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 client = GradientAI()
 
@@ -156,7 +156,7 @@ for completion in stream:
 The async client uses the exact same interface.
 
 ```python
-from gradientai import AsyncGradientAI
+from do_gradientai import AsyncGradientAI
 
 client = AsyncGradientAI()
 
@@ -188,7 +188,7 @@ Typed requests and responses provide autocomplete and documentation within your editor.
 Nested parameters are dictionaries, typed using `TypedDict`, for example:
 
 ```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 client = GradientAI()
 
@@ -207,16 +207,16 @@ print(completion.stream_options)
 
 ## Handling errors
 
-When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `gradientai.APIConnectionError` is raised.
+When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `do_gradientai.APIConnectionError` is raised.
 
 When the API returns a non-success status code (that is, 4xx or 5xx
-response), a subclass of `gradientai.APIStatusError` is raised, containing `status_code` and `response` properties.
+response), a subclass of `do_gradientai.APIStatusError` is raised, containing `status_code` and `response` properties.
 
-All errors inherit from `gradientai.APIError`.
+All errors inherit from `do_gradientai.APIError`.
 
 ```python
-import gradientai
-from gradientai import GradientAI
+import do_gradientai
+from do_gradientai import GradientAI
 
 client = GradientAI()
 
@@ -230,12 +230,12 @@ try:
         ],
         model="llama3.3-70b-instruct",
     )
-except gradientai.APIConnectionError as e:
+except do_gradientai.APIConnectionError as e:
     print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
-except gradientai.RateLimitError as e:
+except do_gradientai.RateLimitError as e:
     print("A 429 status code was received; we should back off a bit.")
-except gradientai.APIStatusError as e:
+except do_gradientai.APIStatusError as e:
     print("Another non-200-range status code was received")
     print(e.status_code)
     print(e.response)
@@ -263,7 +263,7 @@ Connection errors (for example, due to a network connectivity problem), 408 Request Timeout
 You can use the `max_retries` option to configure or disable retry settings:
 
 ```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 # Configure the default for all requests:
 client = GradientAI(
@@ -289,7 +289,7 @@ By default requests time out after 1 minute. You can configure this with a `timeout` option,
 which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
 
 ```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 # Configure the default for all requests:
 client = GradientAI(
@@ -349,7 +349,7 @@ if response.my_field is None:
 The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
 
 ```py
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 client = GradientAI()
 response = client.chat.completions.with_raw_response.create(
@@ -365,9 +365,9 @@ completion = response.parse()  # get the object that `chat.completions.create()` would have returned
 print(completion.choices)
 ```
 
-These methods return an [`APIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/gradientai/_response.py) object.
+These methods return an [`APIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) object.
 
-The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/gradientai/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
+The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
 
 #### `.with_streaming_response`
 
@@ -437,7 +437,7 @@ You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case
 
 ```python
 import httpx
-from gradientai import GradientAI, DefaultHttpxClient
+from do_gradientai import GradientAI, DefaultHttpxClient
 
 client = GradientAI(
     # Or use the `GRADIENT_AI_BASE_URL` env var
@@ -460,7 +460,7 @@ client.with_options(http_client=DefaultHttpxClient(...))
 By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
 
 ```py
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 with GradientAI() as client:
     # make requests here
@@ -488,8 +488,8 @@ If you've upgraded to the latest version but aren't seeing any new features you were expecting
 You can determine the version that is being used at runtime with:
 
 ```py
-import gradientai
-print(gradientai.__version__)
+import do_gradientai
+print(do_gradientai.__version__)
 ```
 
 ## Requirements
````
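Every README hunk in this commit makes the same mechanical change: the package name `gradientai` becomes `do_gradientai` in imports, exception references, and repository paths. Downstream code can be migrated with a whole-word text rewrite; a minimal sketch (the helper name is hypothetical, and a real migration should still be reviewed by hand):

```python
import re

def rename_package(text: str, old: str = "gradientai", new: str = "do_gradientai") -> str:
    """Rewrite whole-word references to the old package name.

    The `\\b` word boundaries keep an already-renamed `do_gradientai`
    from matching again (the underscore blocks the boundary), so the
    rewrite is idempotent.
    """
    return re.sub(rf"\b{re.escape(old)}\b", new, text)

# Lines modeled on the diff above:
sample = "import gradientai\nexcept gradientai.APIConnectionError as e:\n"
print(rename_package(sample), end="")
```

Because the boundary check fails inside `do_gradientai`, running the helper over an already-migrated file leaves it unchanged.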
