Commit fda6270

feat(api): manual updates

1 parent: 7548648

506 files changed: +398 additions, −398 deletions

.stats.yml (1 addition, 1 deletion)

```diff
@@ -1,4 +1,4 @@
 configured_endpoints: 168
 openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/digitalocean%2Fgradientai-f8e8c290636c1e218efcf7bfe92ba7570c11690754d21287d838919fbc943a80.yml
 openapi_spec_hash: 1eddf488ecbe415efb45445697716f5d
-config_hash: 732232c90ba4600bc44b6a96e14beb96
+config_hash: 5cf9c7359c13307780aa25d0203b0b35
```

CONTRIBUTING.md (1 addition, 1 deletion)

```diff
@@ -36,7 +36,7 @@ $ pip install -r requirements-dev.lock
 
 Most of the SDK is generated code. Modifications to code will be persisted between generations, but may
 result in merge conflicts between manual patches and changes from the generator. The generator will never
-modify the contents of the `src/gradientai/lib/` and `examples/` directories.
+modify the contents of the `src/do_gradientai/lib/` and `examples/` directories.
 
 ## Adding and running examples
 
```

README.md (24 additions, 24 deletions)

````diff
@@ -36,7 +36,7 @@ The full API of this library can be found in [api.md](api.md).
 
 ```python
 import os
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 api_client = GradientAI(
     api_key=os.environ.get("GRADIENTAI_API_KEY"),  # This is the default and can be omitted
@@ -99,7 +99,7 @@ Simply import `AsyncGradientAI` instead of `GradientAI` and use `await` with eac
 ```python
 import os
 import asyncio
-from gradientai import AsyncGradientAI
+from do_gradientai import AsyncGradientAI
 
 client = AsyncGradientAI(
     api_key=os.environ.get("GRADIENTAI_API_KEY"),  # This is the default and can be omitted
@@ -139,8 +139,8 @@ Then you can enable it by instantiating the client with `http_client=DefaultAioH
 
 ```python
 import asyncio
-from gradientai import DefaultAioHttpClient
-from gradientai import AsyncGradientAI
+from do_gradientai import DefaultAioHttpClient
+from do_gradientai import AsyncGradientAI
 
 
 async def main() -> None:
@@ -168,7 +168,7 @@ asyncio.run(main())
 We provide support for streaming responses using Server Side Events (SSE).
 
 ```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 client = GradientAI()
 
@@ -189,7 +189,7 @@ for completion in stream:
 The async client uses the exact same interface.
 
 ```python
-from gradientai import AsyncGradientAI
+from do_gradientai import AsyncGradientAI
 
 client = AsyncGradientAI()
 
@@ -221,7 +221,7 @@ Typed requests and responses provide autocomplete and documentation within your
 Nested parameters are dictionaries, typed using `TypedDict`, for example:
 
 ```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 client = GradientAI()
 
@@ -240,16 +240,16 @@ print(completion.stream_options)
 
 ## Handling errors
 
-When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `gradientai.APIConnectionError` is raised.
+When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `do_gradientai.APIConnectionError` is raised.
 
 When the API returns a non-success status code (that is, 4xx or 5xx
-response), a subclass of `gradientai.APIStatusError` is raised, containing `status_code` and `response` properties.
+response), a subclass of `do_gradientai.APIStatusError` is raised, containing `status_code` and `response` properties.
 
-All errors inherit from `gradientai.APIError`.
+All errors inherit from `do_gradientai.APIError`.
 
 ```python
-import gradientai
-from gradientai import GradientAI
+import do_gradientai
+from do_gradientai import GradientAI
 
 client = GradientAI()
 
@@ -263,12 +263,12 @@ try:
         ],
         model="llama3.3-70b-instruct",
     )
-except gradientai.APIConnectionError as e:
+except do_gradientai.APIConnectionError as e:
     print("The server could not be reached")
     print(e.__cause__)  # an underlying Exception, likely raised within httpx.
-except gradientai.RateLimitError as e:
+except do_gradientai.RateLimitError as e:
     print("A 429 status code was received; we should back off a bit.")
-except gradientai.APIStatusError as e:
+except do_gradientai.APIStatusError as e:
     print("Another non-200-range status code was received")
     print(e.status_code)
     print(e.response)
@@ -296,7 +296,7 @@ Connection errors (for example, due to a network connectivity problem), 408 Requ
 You can use the `max_retries` option to configure or disable retry settings:
 
 ```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 # Configure the default for all requests:
 client = GradientAI(
@@ -322,7 +322,7 @@ By default requests time out after 1 minute. You can configure this with a `time
 which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
 
 ```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 # Configure the default for all requests:
 client = GradientAI(
@@ -382,7 +382,7 @@ if response.my_field is None:
 The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
 
 ```py
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 client = GradientAI()
 response = client.chat.completions.with_raw_response.create(
@@ -398,9 +398,9 @@ completion = response.parse()  # get the object that `chat.completions.create()`
 print(completion.choices)
 ```
 
-These methods return an [`APIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/gradientai/_response.py) object.
+These methods return an [`APIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) object.
 
-The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/gradientai/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
+The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
 
 #### `.with_streaming_response`
 
@@ -470,7 +470,7 @@ You can directly override the [httpx client](https://www.python-httpx.org/api/#c
 
 ```python
 import httpx
-from gradientai import GradientAI, DefaultHttpxClient
+from do_gradientai import GradientAI, DefaultHttpxClient
 
 client = GradientAI(
     # Or use the `GRADIENT_AI_BASE_URL` env var
@@ -493,7 +493,7 @@ client.with_options(http_client=DefaultHttpxClient(...))
 By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
 
 ```py
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 with GradientAI() as client:
     # make requests here
@@ -521,8 +521,8 @@ If you've upgraded to the latest version but aren't seeing any new features you
 You can determine the version that is being used at runtime with:
 
 ```py
-import gradientai
-print(gradientai.__version__)
+import do_gradientai
+print(do_gradientai.__version__)
 ```
 
 ## Requirements
````
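Since every hunk in this commit is the same mechanical rename of the top-level package (`gradientai` to `do_gradientai`), downstream code can be updated the same way. Below is a minimal sketch of such a migration step; `migrate_source` is a hypothetical helper written for illustration, not part of the SDK.

```python
import re

# Match the old package name as a whole word only. Because `_` is a word
# character, `\b` does not fire inside `do_gradientai`, so already-migrated
# references are left alone and the rewrite is safe to re-run. URL-encoded
# occurrences such as `%2Fgradientai-...` in the spec URL are also skipped,
# matching what this commit's diff does.
_OLD_PACKAGE = re.compile(r"\bgradientai\b")

def migrate_source(source: str) -> str:
    # Rewrite references to the old `gradientai` package name.
    return _OLD_PACKAGE.sub("do_gradientai", source)

print(migrate_source("from gradientai import GradientAI"))
# → from do_gradientai import GradientAI
```

Class names like `GradientAI` are untouched because the pattern is lowercase and case-sensitive, which mirrors the diff: only the module path changed, not the client classes.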

0 commit comments