31 changes: 31 additions & 0 deletions .github/workflows/publish-pypi.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,31 @@
# This workflow is triggered when a GitHub release is created.
# It can also be run manually to re-publish to PyPI in case it failed for some reason.
# You can run this workflow by navigating to https://www.github.com/digitalocean/genai-python/actions/workflows/publish-pypi.yml
name: Publish PyPI
on:
  workflow_dispatch:

  release:
    types: [published]

jobs:
  publish:
    name: publish
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Install Rye
        run: |
          curl -sSf https://rye.astral.sh/get | bash
          echo "$HOME/.rye/shims" >> $GITHUB_PATH
        env:
          RYE_VERSION: '0.44.0'
          RYE_INSTALL_OPTION: '--yes'

      - name: Publish to PyPI
        run: |
          bash ./bin/publish-pypi
        env:
          PYPI_TOKEN: ${{ secrets.DIGITALOCEAN_GENAI_SDK_PYPI_TOKEN || secrets.PYPI_TOKEN }}
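The workflow delegates the actual upload to `bin/publish-pypi`, which is not part of this diff. A minimal sketch of what such a script might contain, assuming Rye's `build` and `publish` subcommands (hypothetical; the real script may differ):

```shell
#!/usr/bin/env bash
set -euo pipefail

publish() {
  # Hypothetical sketch: build sdist + wheel, then upload using the token
  # that the workflow passes in via the PYPI_TOKEN env var.
  rye build --clean
  rye publish --yes --token "${PYPI_TOKEN}"
}

if [ -n "${PYPI_TOKEN:-}" ]; then
  publish
else
  echo "PYPI_TOKEN not set; skipping publish"
fi
```

Keeping the upload in a script (rather than inline in the workflow) is what lets the `workflow_dispatch` trigger re-run a failed publish without editing the workflow.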
21 changes: 21 additions & 0 deletions .github/workflows/release-doctor.yml
@@ -0,0 +1,21 @@
name: Release Doctor
on:
  pull_request:
    branches:
      - main
  workflow_dispatch:

jobs:
  release_doctor:
    name: release doctor
    runs-on: ubuntu-latest
    if: github.repository == 'digitalocean/genai-python' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch' || startsWith(github.head_ref, 'release-please') || github.head_ref == 'next')

    steps:
      - uses: actions/checkout@v4

      - name: Check release environment
        run: |
          bash ./bin/check-release-environment
        env:
          PYPI_TOKEN: ${{ secrets.DIGITALOCEAN_GENAI_SDK_PYPI_TOKEN || secrets.PYPI_TOKEN }}
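`bin/check-release-environment` is likewise not shown in this diff. Such a preflight check typically just verifies that the required secrets are present before a release is attempted; a hedged, illustrative sketch:

```shell
#!/usr/bin/env bash
set -u

# Hypothetical sketch of a release-environment preflight check:
# collect any missing secrets and fail with a readable report.
check_release_environment() {
  local -a errors=()
  if [ -z "${PYPI_TOKEN:-}" ]; then
    errors+=("The PYPI_TOKEN secret has not been set.")
  fi
  if [ "${#errors[@]}" -gt 0 ]; then
    printf -- '- %s\n' "${errors[@]}" >&2
    return 1
  fi
  echo "The environment is ready to push releases!"
}
```

Running this on every pull request into `main` (as the workflow above does) surfaces a missing token before the release itself fails.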
3 changes: 3 additions & 0 deletions .release-please-manifest.json
@@ -0,0 +1,3 @@
{
".": "0.1.0-alpha.1"
}
8 changes: 4 additions & 4 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 126
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/digitalocean%2Fdigitalocean-genai-sdk-bdf24159c6ebb5402d6c05a5165cb1501dc37cf6c664baa9eb318efb0f89dddd.yml
openapi_spec_hash: 686329a97002025d118dc2367755c18d
config_hash: 39a1554af43cd406e37b5ed5c943649c
configured_endpoints: 4
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/digitalocean%2Fdigitalocean-genai-sdk-17838dec38ee8475c4bf4695b8dc70fe42a8f4da8ae9ffd415dc895b6628a952.yml
openapi_spec_hash: cfe5453e150989c8a9dbc9d7b4d1f76a
config_hash: 2da74b81015f4ef6cad3a0bcb9025834
18 changes: 18 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,18 @@
# Changelog

## 0.1.0-alpha.1 (2025-06-04)

Full Changelog: [v0.0.1-alpha.0...v0.1.0-alpha.1](https://github.com/digitalocean/genai-python/compare/v0.0.1-alpha.0...v0.1.0-alpha.1)

### Features

* **api:** update via SDK Studio ([691923d](https://github.com/digitalocean/genai-python/commit/691923d9f60b5ebe5dc34c8227273d06448945e8))
* **client:** add follow_redirects request option ([5a6d480](https://github.com/digitalocean/genai-python/commit/5a6d480aef6d4c5084f484d1b69e6f49568a8caf))


### Chores

* **docs:** remove reference to rye shell ([29febe9](https://github.com/digitalocean/genai-python/commit/29febe9affcb0ae41ec69f8aea3ae6ef53967537))
* **docs:** remove unnecessary param examples ([35ec489](https://github.com/digitalocean/genai-python/commit/35ec48915a8bd750060634208e91bd98c905b53c))
* update SDK settings ([f032621](https://github.com/digitalocean/genai-python/commit/f03262136aa46e9325ac2fae785bf48a56f0127b))
* update SDK settings ([b2cf700](https://github.com/digitalocean/genai-python/commit/b2cf700a0419f7d6e3f23ee02747fe7766a05f98))
7 changes: 3 additions & 4 deletions CONTRIBUTING.md
@@ -17,8 +17,7 @@ $ rye sync --all-features
You can then run scripts using `rye run python script.py` or by activating the virtual environment:

```sh
$ rye shell
# or manually activate - https://docs.python.org/3/library/venv.html#how-venvs-work
# Activate the virtual environment - https://docs.python.org/3/library/venv.html#how-venvs-work
$ source .venv/bin/activate

# now you can omit the `rye run` prefix
@@ -63,7 +62,7 @@ If you’d like to use the repository from source, you can either install from g
To install via git:

```sh
$ pip install git+ssh://git@github.com/stainless-sdks/digitalocean-genai-sdk-python.git
$ pip install git+ssh://git@github.com/digitalocean/genai-python.git
```

Alternatively, you can build from source and install the wheel file:
@@ -121,7 +120,7 @@ the changes aren't made through the automated pipeline, you may want to make rel

### Publish with a GitHub workflow

You can release to package managers by using [the `Publish PyPI` GitHub action](https://www.github.com/stainless-sdks/digitalocean-genai-sdk-python/actions/workflows/publish-pypi.yml). This requires a setup organization or repository secret to be set up.
You can release to package managers by using [the `Publish PyPI` GitHub action](https://www.github.com/digitalocean/genai-python/actions/workflows/publish-pypi.yml). This requires an organization or repository secret to be set up.

### Publish manually

135 changes: 81 additions & 54 deletions README.md
@@ -1,6 +1,6 @@
# Digitalocean Genai SDK Python API library

[![PyPI version](https://img.shields.io/pypi/v/digitalocean_genai_sdk.svg)](https://pypi.org/project/digitalocean_genai_sdk/)
[![PyPI version](https://img.shields.io/pypi/v/do-genai.svg)](https://pypi.org/project/do-genai/)

The Digitalocean Genai SDK Python library provides convenient access to the Digitalocean Genai SDK REST API from any Python 3.8+
application. The library includes type definitions for all request params and response fields,
@@ -15,13 +15,10 @@ The REST API documentation can be found on [help.openai.com](https://help.openai
## Installation

```sh
# install from this staging repo
pip install git+ssh://git@github.com/stainless-sdks/digitalocean-genai-sdk-python.git
# install from PyPI
pip install --pre do-genai
```

> [!NOTE]
> Once this package is [published to PyPI](https://app.stainless.com/docs/guides/publish), this will become: `pip install --pre digitalocean_genai_sdk`

## Usage

The full API of this library can be found in [api.md](api.md).
@@ -36,8 +33,16 @@ client = DigitaloceanGenaiSDK(
), # This is the default and can be omitted
)

assistants = client.assistants.list()
print(assistants.first_id)
create_response = client.chat.completions.create(
    messages=[
        {
            "content": "string",
            "role": "system",
        }
    ],
    model="llama3-8b-instruct",
)
print(create_response.id)
```

While you can provide an `api_key` keyword argument,
@@ -62,8 +67,16 @@ client = AsyncDigitaloceanGenaiSDK(


async def main() -> None:
    assistants = await client.assistants.list()
    print(assistants.first_id)
    create_response = await client.chat.completions.create(
        messages=[
            {
                "content": "string",
                "role": "system",
            }
        ],
        model="llama3-8b-instruct",
    )
    print(create_response.id)


asyncio.run(main())
@@ -89,43 +102,19 @@ from digitalocean_genai_sdk import DigitaloceanGenaiSDK

client = DigitaloceanGenaiSDK()

assistant_object = client.assistants.create(
    model="gpt-4o",
    tool_resources={
        "code_interpreter": {"file_ids": ["string"]},
        "file_search": {
            "vector_store_ids": ["string"],
            "vector_stores": [
                {
                    "chunking_strategy": {"type": "auto"},
                    "file_ids": ["string"],
                    "metadata": {"foo": "string"},
                }
            ],
        },
    },
)
print(assistant_object.tool_resources)
```

## File uploads

Request parameters that correspond to file uploads can be passed as `bytes`, or a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance or a tuple of `(filename, contents, media type)`.

```python
from pathlib import Path
from digitalocean_genai_sdk import DigitaloceanGenaiSDK

client = DigitaloceanGenaiSDK()

client.audio.transcribe_audio(
    file=Path("/path/to/file"),
    model="gpt-4o-transcribe",
)
create_response = client.chat.completions.create(
    messages=[
        {
            "content": "string",
            "role": "system",
        }
    ],
    model="llama3-8b-instruct",
    stream_options={},
)
print(create_response.stream_options)
```

The async client uses the exact same interface. If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically.

## Handling errors

When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `digitalocean_genai_sdk.APIConnectionError` is raised.
@@ -142,7 +131,15 @@ from digitalocean_genai_sdk import DigitaloceanGenaiSDK
client = DigitaloceanGenaiSDK()

try:
    client.assistants.list()
    client.chat.completions.create(
        messages=[
            {
                "content": "string",
                "role": "system",
            }
        ],
        model="llama3-8b-instruct",
    )
except digitalocean_genai_sdk.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
@@ -185,7 +182,15 @@ client = DigitaloceanGenaiSDK(
)

# Or, configure per-request:
client.with_options(max_retries=5).assistants.list()
client.with_options(max_retries=5).chat.completions.create(
    messages=[
        {
            "content": "string",
            "role": "system",
        }
    ],
    model="llama3-8b-instruct",
)
```
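The `max_retries` option controls how many attempts the client makes. Clients in this style typically space retries using exponential backoff with jitter; the helper below is an illustrative, standalone sketch of a full-jitter schedule (not this library's actual implementation):

```python
import random
from typing import List


def backoff_delays(max_retries: int, base: float = 0.5, cap: float = 8.0) -> List[float]:
    """Full-jitter backoff: the nth delay is uniform in [0, min(cap, base * 2**n)]."""
    return [random.uniform(0.0, min(cap, base * (2 ** n))) for n in range(max_retries)]


# Each successive delay's upper bound doubles (0.5s, 1s, 2s, ...) up to the cap.
print(backoff_delays(3))
```

Jitter spreads retries out over time so that many clients recovering from the same outage do not all hit the API again in lockstep.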

### Timeouts
@@ -208,7 +213,15 @@ client = DigitaloceanGenaiSDK(
)

# Override per-request:
client.with_options(timeout=5.0).assistants.list()
client.with_options(timeout=5.0).chat.completions.create(
    messages=[
        {
            "content": "string",
            "role": "system",
        }
    ],
    model="llama3-8b-instruct",
)
```

On timeout, an `APITimeoutError` is thrown.
@@ -249,16 +262,22 @@ The "raw" Response object can be accessed by prefixing `.with_raw_response.` to
from digitalocean_genai_sdk import DigitaloceanGenaiSDK

client = DigitaloceanGenaiSDK()
response = client.assistants.with_raw_response.list()
response = client.chat.completions.with_raw_response.create(
    messages=[
        {
            "content": "string",
            "role": "system",
        }
    ],
    model="llama3-8b-instruct",
)
print(response.headers.get('X-My-Header'))

assistant = response.parse() # get the object that `assistants.list()` would have returned
print(assistant.first_id)
completion = response.parse() # get the object that `chat.completions.create()` would have returned
print(completion.id)
```

These methods return an [`APIResponse`](https://github.com/stainless-sdks/digitalocean-genai-sdk-python/tree/main/src/digitalocean_genai_sdk/_response.py) object.
These methods return an [`APIResponse`](https://github.com/digitalocean/genai-python/tree/main/src/digitalocean_genai_sdk/_response.py) object.

The async client returns an [`AsyncAPIResponse`](https://github.com/stainless-sdks/digitalocean-genai-sdk-python/tree/main/src/digitalocean_genai_sdk/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/genai-python/tree/main/src/digitalocean_genai_sdk/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.

#### `.with_streaming_response`

@@ -267,7 +286,15 @@ The above interface eagerly reads the full response body when you make the reque
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.

```python
with client.assistants.with_streaming_response.list() as response:
with client.chat.completions.with_streaming_response.create(
    messages=[
        {
            "content": "string",
            "role": "system",
        }
    ],
    model="llama3-8b-instruct",
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
@@ -362,7 +389,7 @@ This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) con

We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.

We are keen for your feedback; please open an [issue](https://www.github.com/stainless-sdks/digitalocean-genai-sdk-python/issues) with questions, bugs, or suggestions.
We are keen for your feedback; please open an [issue](https://www.github.com/digitalocean/genai-python/issues) with questions, bugs, or suggestions.

### Determining the installed version
