Commit cecff02

Authored by stainless-app[bot], dgellow, RobertCraigie, and paperspace-philip
release: 0.1.0-alpha.5 (#11)
* feat(api): update via SDK Studio
* chore(internal): codegen related update
* codegen metadata
* feat(api): update via SDK Studio
* feat(api): update via SDK Studio
* codegen metadata
* feat(api): update via SDK Studio
* feat(api): update via SDK Studio
* feat(api): update via SDK Studio
* feat(api): update via SDK Studio
* feat(api): update via SDK Studio
* feat(api): update via SDK Studio
* feat(api): update via SDK Studio
* codegen metadata
* codegen metadata
* feat(api): define api links and meta as shared models
* feat(api): update OpenAI spec and add endpoint/smodels
* feat: use inference key for chat.completions.create()
* fix(ci): release-doctor — report correct token name
* Update src/do_gradientai/resources/chat/completions.py (Co-authored-by: Robert Craigie <robert@craigie.dev>)
* release: 0.1.0-alpha.5

Co-authored-by: stainless-app[bot] <142633134+stainless-app[bot]@users.noreply.github.com>
Co-authored-by: Samuel El-Borai <sam@elborai.me>
Co-authored-by: Robert Craigie <robert@craigie.dev>
Co-authored-by: paperspace-philip <58186049+paperspace-philip@users.noreply.github.com>
1 parent 78637e9 · commit cecff02

File tree

293 files changed: +4932 additions, -2984 deletions


.release-please-manifest.json

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,3 +1,3 @@
 {
-  ".": "0.1.0-alpha.4"
+  ".": "0.1.0-alpha.5"
 }
```

.stats.yml

Lines changed: 4 additions & 4 deletions

```diff
@@ -1,4 +1,4 @@
-configured_endpoints: 70
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/digitalocean%2Fgradientai-e40feaac59c85aace6aa42d2749b20e0955dbbae58b06c3a650bc03adafcd7b5.yml
-openapi_spec_hash: 825c1a4816938e9f594b7a8c06692667
-config_hash: 211ece2994c6ac52f84f78ee56c1097a
+configured_endpoints: 77
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/digitalocean%2Fgradientai-e8b3cbc80e18e4f7f277010349f25e1319156704f359911dc464cc21a0d077a6.yml
+openapi_spec_hash: c773d792724f5647ae25a5ae4ccec208
+config_hash: ecf128ea21a8fead9dabb9609c4dbce8
```
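The `openapi_spec_hash` above is a 32-character hex digest of the spec file, used to detect when the upstream OpenAPI document changed. As a minimal sketch, assuming the digest is MD5 (an assumption based only on its length, not confirmed by the repo), one could recompute it like this:

```python
import hashlib


def spec_hash(spec_bytes: bytes) -> str:
    # Compute a 32-char hex digest of the raw spec contents.
    # MD5 is an assumption here; the tooling may use a different hash.
    return hashlib.md5(spec_bytes).hexdigest()


digest = spec_hash(b"openapi: 3.0.0\n")
print(len(digest))  # 32
```

Comparing this digest against the committed `openapi_spec_hash` would tell you whether a local spec copy matches the one the SDK was generated from.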

CHANGELOG.md

Lines changed: 31 additions & 0 deletions

```diff
@@ -1,5 +1,36 @@
 # Changelog
 
+## 0.1.0-alpha.5 (2025-06-27)
+
+Full Changelog: [v0.1.0-alpha.4...v0.1.0-alpha.5](https://github.com/digitalocean/gradientai-python/compare/v0.1.0-alpha.4...v0.1.0-alpha.5)
+
+### Features
+
+* **api:** define api links and meta as shared models ([8d87001](https://github.com/digitalocean/gradientai-python/commit/8d87001b51de17dd1a36419c0e926cef119f20b8))
+* **api:** update OpenAI spec and add endpoint/smodels ([e92c54b](https://github.com/digitalocean/gradientai-python/commit/e92c54b05f1025b6173945524724143fdafc7728))
+* **api:** update via SDK Studio ([1ae76f7](https://github.com/digitalocean/gradientai-python/commit/1ae76f78ce9e74f8fd555e3497299127e9aa6889))
+* **api:** update via SDK Studio ([98424f4](https://github.com/digitalocean/gradientai-python/commit/98424f4a2c7e00138fb5eecf94ca72e2ffcc1212))
+* **api:** update via SDK Studio ([299fd1b](https://github.com/digitalocean/gradientai-python/commit/299fd1b29b42f6f2581150e52dcf65fc73270862))
+* **api:** update via SDK Studio ([9a45427](https://github.com/digitalocean/gradientai-python/commit/9a45427678644c34afe9792a2561f394718e64ff))
+* **api:** update via SDK Studio ([abe573f](https://github.com/digitalocean/gradientai-python/commit/abe573fcc2233c7d71f0a925eea8fa9dd4d0fb91))
+* **api:** update via SDK Studio ([e5ce590](https://github.com/digitalocean/gradientai-python/commit/e5ce59057792968892317215078ac2c11e811812))
+* **api:** update via SDK Studio ([1daa3f5](https://github.com/digitalocean/gradientai-python/commit/1daa3f55a49b5411d1b378fce30aea3ccbccb6d7))
+* **api:** update via SDK Studio ([1c702b3](https://github.com/digitalocean/gradientai-python/commit/1c702b340e4fd723393c0f02df2a87d03ca8c9bb))
+* **api:** update via SDK Studio ([891d6b3](https://github.com/digitalocean/gradientai-python/commit/891d6b32e5bdb07d23abf898cec17a60ee64f99d))
+* **api:** update via SDK Studio ([dcbe442](https://github.com/digitalocean/gradientai-python/commit/dcbe442efc67554e60b3b28360a4d9f7dcbb313a))
+* use inference key for chat.completions.create() ([5d38e2e](https://github.com/digitalocean/gradientai-python/commit/5d38e2eb8604a0a4065d146ba71aa4a5a0e93d85))
+
+
+### Bug Fixes
+
+* **ci:** release-doctor — report correct token name ([4d2b3dc](https://github.com/digitalocean/gradientai-python/commit/4d2b3dcefdefc3830d631c5ac27b58778a299983))
+
+
+### Chores
+
+* clean up pyproject ([78637e9](https://github.com/digitalocean/gradientai-python/commit/78637e99816d459c27b4f2fd2f6d79c8d32ecfbe))
+* **internal:** codegen related update ([58d7319](https://github.com/digitalocean/gradientai-python/commit/58d7319ce68c639c2151a3e96a5d522ec06ff96f))
+
 ## 0.1.0-alpha.4 (2025-06-25)
 
 Full Changelog: [v0.1.0-alpha.3...v0.1.0-alpha.4](https://github.com/digitalocean/gradientai-python/compare/v0.1.0-alpha.3...v0.1.0-alpha.4)
```

CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -36,7 +36,7 @@ $ pip install -r requirements-dev.lock
 
 Most of the SDK is generated code. Modifications to code will be persisted between generations, but may
 result in merge conflicts between manual patches and changes from the generator. The generator will never
-modify the contents of the `src/gradientai/lib/` and `examples/` directories.
+modify the contents of the `src/do_gradientai/lib/` and `examples/` directories.
 
 ## Adding and running examples
 
```
README.md

Lines changed: 59 additions & 34 deletions
````diff
@@ -25,16 +25,22 @@ The full API of this library can be found in [api.md](api.md).
 
 ```python
 import os
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 client = GradientAI(
     api_key=os.environ.get("GRADIENTAI_API_KEY"),  # This is the default and can be omitted
 )
 
-versions = client.agents.versions.list(
-    uuid="REPLACE_ME",
+completion = client.chat.completions.create(
+    messages=[
+        {
+            "content": "string",
+            "role": "system",
+        }
+    ],
+    model="llama3-8b-instruct",
 )
-print(versions.agent_versions)
+print(completion.id)
 ```
 
 While you can provide an `api_key` keyword argument,
````
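The quickstart above resolves the API key from the `GRADIENTAI_API_KEY` environment variable when no `api_key` argument is passed. A minimal, standalone sketch of that default-resolution pattern (the helper name `resolve_api_key` is hypothetical, not part of the SDK):

```python
import os
from typing import Optional


def resolve_api_key(explicit_key: Optional[str] = None) -> str:
    # Prefer an explicitly passed key, then fall back to the environment,
    # mirroring the `api_key=os.environ.get("GRADIENTAI_API_KEY")` default above.
    key = explicit_key or os.environ.get("GRADIENTAI_API_KEY")
    if key is None:
        raise RuntimeError("Set GRADIENTAI_API_KEY or pass api_key explicitly")
    return key


os.environ["GRADIENTAI_API_KEY"] = "sk-example"  # demo value only
print(resolve_api_key())               # falls back to the environment
print(resolve_api_key("sk-explicit"))  # an explicit argument wins
```

This is why the `api_key=` line in the README is marked "can be omitted": the environment lookup is the default.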
````diff
@@ -49,18 +55,24 @@ Simply import `AsyncGradientAI` instead of `GradientAI` and use `await` with eac
 ```python
 import os
 import asyncio
-from gradientai import AsyncGradientAI
+from do_gradientai import AsyncGradientAI
 
 client = AsyncGradientAI(
     api_key=os.environ.get("GRADIENTAI_API_KEY"),  # This is the default and can be omitted
 )
 
 
 async def main() -> None:
-    versions = await client.agents.versions.list(
-        uuid="REPLACE_ME",
+    completion = await client.chat.completions.create(
+        messages=[
+            {
+                "content": "string",
+                "role": "system",
+            }
+        ],
+        model="llama3-8b-instruct",
     )
-    print(versions.agent_versions)
+    print(completion.id)
 
 
 asyncio.run(main())
````
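The async variant above is driven by `asyncio.run(main())`. The same pattern can be sketched without the SDK, using a stand-in coroutine in place of `client.chat.completions.create` (the `fake_create` name and return shape are illustrative assumptions):

```python
import asyncio


async def fake_create(model: str) -> dict:
    # Stand-in for `await client.chat.completions.create(...)`; no network I/O.
    await asyncio.sleep(0)  # yield control, as a real awaitable call would
    return {"id": "cmpl-123", "model": model}


async def main() -> dict:
    completion = await fake_create(model="llama3-8b-instruct")
    return completion


result = asyncio.run(main())
print(result["id"])
```

`asyncio.run` creates the event loop, awaits `main()` to completion, and closes the loop, which is why the README places all awaited calls inside `main`.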
````diff
@@ -84,19 +96,25 @@ Then you can enable it by instantiating the client with `http_client=DefaultAioH
 ```python
 import os
 import asyncio
-from gradientai import DefaultAioHttpClient
-from gradientai import AsyncGradientAI
+from do_gradientai import DefaultAioHttpClient
+from do_gradientai import AsyncGradientAI
 
 
 async def main() -> None:
     async with AsyncGradientAI(
         api_key=os.environ.get("GRADIENTAI_API_KEY"),  # This is the default and can be omitted
         http_client=DefaultAioHttpClient(),
     ) as client:
-        versions = await client.agents.versions.list(
-            uuid="REPLACE_ME",
+        completion = await client.chat.completions.create(
+            messages=[
+                {
+                    "content": "string",
+                    "role": "system",
+                }
+            ],
+            model="llama3-8b-instruct",
         )
-        print(versions.agent_versions)
+        print(completion.id)
 
 
 asyncio.run(main())
````
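The `http_client=DefaultAioHttpClient()` option above swaps the transport the client uses. The underlying design is plain constructor injection, sketched here with hypothetical stand-in classes (none of these names are the SDK's real internals):

```python
class HttpxLikeTransport:
    # Stand-in for the default httpx-based transport.
    name = "httpx"

    def request(self, url: str) -> str:
        return f"{self.name} GET {url}"


class AioHttpLikeTransport:
    # Stand-in for an aiohttp-based transport like DefaultAioHttpClient.
    name = "aiohttp"

    def request(self, url: str) -> str:
        return f"{self.name} GET {url}"


class Client:
    # Mirrors the `http_client=...` constructor option: use the given
    # transport if provided, otherwise fall back to the default one.
    def __init__(self, http_client=None):
        self.http_client = http_client or HttpxLikeTransport()


default_client = Client()
aio_client = Client(http_client=AioHttpLikeTransport())
print(default_client.http_client.name)  # httpx
print(aio_client.http_client.name)      # aiohttp
```

Because the transport is injected rather than hard-coded, the rest of the client code is unchanged whichever HTTP backend is in use.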
````diff
@@ -116,41 +134,48 @@ Typed requests and responses provide autocomplete and documentation within your
 Nested parameters are dictionaries, typed using `TypedDict`, for example:
 
 ```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 client = GradientAI()
 
-evaluation_test_case = client.regions.evaluation_test_cases.create(
-    star_metric={},
+completion = client.chat.completions.create(
+    messages=[
+        {
+            "content": "string",
+            "role": "system",
+        }
+    ],
+    model="llama3-8b-instruct",
+    stream_options={},
 )
-print(evaluation_test_case.star_metric)
+print(completion.stream_options)
 ```
 
 ## Handling errors
 
-When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `gradientai.APIConnectionError` is raised.
+When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `do_gradientai.APIConnectionError` is raised.
 
 When the API returns a non-success status code (that is, 4xx or 5xx
-response), a subclass of `gradientai.APIStatusError` is raised, containing `status_code` and `response` properties.
+response), a subclass of `do_gradientai.APIStatusError` is raised, containing `status_code` and `response` properties.
 
-All errors inherit from `gradientai.APIError`.
+All errors inherit from `do_gradientai.APIError`.
 
 ```python
-import gradientai
-from gradientai import GradientAI
+import do_gradientai
+from do_gradientai import GradientAI
 
 client = GradientAI()
 
 try:
     client.agents.versions.list(
         uuid="REPLACE_ME",
     )
-except gradientai.APIConnectionError as e:
+except do_gradientai.APIConnectionError as e:
     print("The server could not be reached")
     print(e.__cause__)  # an underlying Exception, likely raised within httpx.
-except gradientai.RateLimitError as e:
+except do_gradientai.RateLimitError as e:
     print("A 429 status code was received; we should back off a bit.")
-except gradientai.APIStatusError as e:
+except do_gradientai.APIStatusError as e:
     print("Another non-200-range status code was received")
     print(e.status_code)
     print(e.response)
````
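The `except` chain in the hunk above relies on the documented hierarchy: everything inherits from `APIError`, with `APIConnectionError` and `APIStatusError` (and its `RateLimitError` subclass) below it. A self-contained sketch with local stand-in classes (these are not the SDK's implementations, just minimal stand-ins with the same names and relationships):

```python
class APIError(Exception):
    pass


class APIConnectionError(APIError):
    pass


class APIStatusError(APIError):
    # Carries the HTTP status, as the README says the real class does.
    def __init__(self, status_code: int):
        super().__init__(f"HTTP {status_code}")
        self.status_code = status_code


class RateLimitError(APIStatusError):
    def __init__(self):
        super().__init__(429)


def classify(exc: APIError) -> str:
    # Same most-specific-first ordering as the README's except chain:
    # a RateLimitError must be checked before the broader APIStatusError.
    if isinstance(exc, APIConnectionError):
        return "retry: server unreachable"
    if isinstance(exc, RateLimitError):
        return "back off: 429"
    if isinstance(exc, APIStatusError):
        return f"non-2xx: {exc.status_code}"
    return "unknown"


print(classify(RateLimitError()))     # back off: 429
print(classify(APIStatusError(500)))  # non-2xx: 500
```

Ordering matters: if the broad `APIStatusError` clause came first, the 429-specific handling would never run.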
````diff
@@ -178,7 +203,7 @@ Connection errors (for example, due to a network connectivity problem), 408 Requ
 You can use the `max_retries` option to configure or disable retry settings:
 
 ```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 # Configure the default for all requests:
 client = GradientAI(
````
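Retries of this kind are normally spaced with exponential backoff. As a sketch of the schedule such a `max_retries` mechanism typically follows (the base, cap, and omission of jitter are assumptions for determinism, not the SDK's actual constants):

```python
def backoff_delay(attempt: int, base: float = 0.5, cap: float = 8.0) -> float:
    # Exponential backoff: base * 2**attempt, capped so delays don't
    # grow without bound. Real clients usually also add random jitter.
    return min(cap, base * (2 ** attempt))


delays = [backoff_delay(n) for n in range(5)]
print(delays)  # [0.5, 1.0, 2.0, 4.0, 8.0]
```

Each failed attempt roughly doubles the wait, which is why disabling retries (`max_retries=0`) is the right call for latency-sensitive paths.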
````diff
@@ -198,7 +223,7 @@ By default requests time out after 1 minute. You can configure this with a `time
 which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
 
 ```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 # Configure the default for all requests:
 client = GradientAI(
````
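The timeout option layers: a per-request value overrides the client-level default, which overrides the library's 1-minute fallback. A minimal sketch of that precedence (the helper is hypothetical; the real SDK resolves this internally via httpx):

```python
from typing import Optional

DEFAULT_TIMEOUT = 60.0  # the README's "requests time out after 1 minute" default


def effective_timeout(client_default: Optional[float] = None,
                      per_request: Optional[float] = None) -> float:
    # Most specific setting wins: per-request, then client, then library default.
    if per_request is not None:
        return per_request
    if client_default is not None:
        return client_default
    return DEFAULT_TIMEOUT


print(effective_timeout())                   # 60.0
print(effective_timeout(client_default=20))  # 20
print(effective_timeout(20, per_request=5))  # 5
```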
````diff
@@ -252,7 +277,7 @@ if response.my_field is None:
 The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
 
 ```py
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 client = GradientAI()
 response = client.agents.versions.with_raw_response.list(
````
````diff
@@ -264,9 +289,9 @@ version = response.parse()  # get the object that `agents.versions.list()` would
 print(version.agent_versions)
 ```
 
-These methods return an [`APIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/gradientai/_response.py) object.
+These methods return an [`APIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) object.
 
-The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/gradientai/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
+The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
 
 #### `.with_streaming_response`
 
````
````diff
@@ -330,7 +355,7 @@ You can directly override the [httpx client](https://www.python-httpx.org/api/#c
 
 ```python
 import httpx
-from gradientai import GradientAI, DefaultHttpxClient
+from do_gradientai import GradientAI, DefaultHttpxClient
 
 client = GradientAI(
     # Or use the `GRADIENT_AI_BASE_URL` env var
````
````diff
@@ -353,7 +378,7 @@ client.with_options(http_client=DefaultHttpxClient(...))
 By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
 
 ```py
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 with GradientAI() as client:
     # make requests here
````
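The `with GradientAI() as client:` form works because the client implements the context-manager protocol, calling `close()` on exit. A minimal sketch of that protocol (`ClientSketch` is a stand-in, not the SDK class):

```python
class ClientSketch:
    # Mimics the documented behavior: connections close deterministically
    # when the `with` block exits, instead of waiting for garbage collection.
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()
        return False  # do not swallow exceptions raised in the block


with ClientSketch() as client:
    assert not client.closed  # still usable inside the block
print(client.closed)  # True
```

Returning `False` from `__exit__` means exceptions raised inside the block still propagate; the client is closed either way.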
````diff
@@ -381,8 +406,8 @@ If you've upgraded to the latest version but aren't seeing any new features you
 You can determine the version that is being used at runtime with:
 
 ```py
-import gradientai
-print(gradientai.__version__)
+import do_gradientai
+print(do_gradientai.__version__)
 ```
 
 ## Requirements
````
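When comparing a `__version__` like this release's `0.1.0-alpha.5` against another, plain string comparison misorders pre-releases (`"alpha.10" < "alpha.4"` lexically). A hypothetical helper sketching a correct sort key (in real code, `packaging.version.Version` is the usual tool):

```python
import re


def version_key(version: str):
    # Turn "0.1.0-alpha.5" into a sortable tuple. A final release (no
    # pre-release tag) must sort after all of its own pre-releases.
    match = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)(?:-([a-z]+)\.(\d+))?", version)
    if match is None:
        raise ValueError(f"unparseable version: {version}")
    major, minor, patch, tag, pre = match.groups()
    release = (int(major), int(minor), int(patch))
    if tag is None:
        return release + (1, 0)      # final release ranks highest
    return release + (0, int(pre))   # pre-releases sort numerically before it


assert version_key("0.1.0-alpha.5") > version_key("0.1.0-alpha.4")
assert version_key("0.1.0") > version_key("0.1.0-alpha.5")
print("ordering ok")
```

This handles only the single-tag `X.Y.Z-tag.N` shape used by this repo's releases; full semver precedence has more cases.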

0 commit comments
