feat: DIA-1384: Estimate cost for an inference run #330

Merged · 7 commits · Oct 23, 2024

Changes from 5 commits
13 changes: 13 additions & 0 deletions .mock/definition/__package__.yml
@@ -2347,6 +2347,19 @@ types:
organization: optional<PromptVersionOrganization>
source:
openapi: openapi/openapi.yaml
InferenceRunCostEstimate:
properties:
prompt_cost_usd:
type: optional<string>
docs: Cost of the prompt (in USD)
completion_cost_usd:
type: optional<string>
docs: Cost of the completion (in USD)
total_cost_usd:
type: optional<string>
docs: Total cost of the inference (in USD)
source:
openapi: openapi/openapi.yaml
RefinedPromptResponseRefinementStatus:
enum:
- Pending
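For orientation, the new `InferenceRunCostEstimate` type above maps to a payload shaped roughly like the following (a hypothetical sketch; the dollar amounts are illustrative, not real API output):

```python
# Hypothetical payload matching the InferenceRunCostEstimate schema above;
# all three fields are optional strings carrying USD amounts.
estimate_payload = {
    "prompt_cost_usd": "0.0420",
    "completion_cost_usd": "0.0130",
    "total_cost_usd": "0.0550",
}
```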
45 changes: 45 additions & 0 deletions .mock/definition/prompts/versions.yml
@@ -162,6 +162,51 @@ service:
organization: 1
audiences:
- public
cost_estimate:
path: /api/prompts/{prompt_id}/versions/{version_id}/cost-estimate
method: POST
auth: true
docs: >
Get cost estimate for running a prompt version on a particular
project/subset
path-parameters:
prompt_id:
type: integer
docs: Prompt ID
version_id:
type: integer
docs: Prompt Version ID
display-name: >-
Get cost estimate for running a prompt version on a particular
project/subset
request:
name: VersionsCostEstimateRequest
query-parameters:
project_id:
type: integer
docs: ID of the project to get an estimate for running on
project_subset:
type: integer
docs: >-
Subset of the project to get an estimate for running on (e.g.
'All', 'Sample', or 'HasGT')
response:
docs: ''
type: root.InferenceRunCostEstimate
examples:
- path-parameters:
prompt_id: 1
version_id: 1
query-parameters:
project_id: 1
project_subset: 1
response:
body:
prompt_cost_usd: prompt_cost_usd
completion_cost_usd: completion_cost_usd
total_cost_usd: total_cost_usd
audiences:
- public
get_refined_prompt:
path: /api/prompts/{prompt_id}/versions/{version_id}/refine
method: GET
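Outside the generated SDK, the endpoint defined above can also be exercised directly. A minimal sketch with `httpx`, assuming a locally running instance and Label Studio's `Token` authorization header (the base URL and key are placeholders):

```python
import httpx

BASE_URL = "http://localhost:8080"  # placeholder instance URL (assumption)
API_KEY = "YOUR_API_KEY"            # placeholder access token

# POST /api/prompts/{prompt_id}/versions/{version_id}/cost-estimate with
# project_id and project_subset passed as query parameters, per versions.yml above.
response = httpx.post(
    f"{BASE_URL}/api/prompts/1/versions/1/cost-estimate",
    params={"project_id": 1, "project_subset": 1},
    headers={"Authorization": f"Token {API_KEY}"},
)
response.raise_for_status()
print(response.json())
```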
197 changes: 106 additions & 91 deletions poetry.lock

Large diffs are not rendered by default.

97 changes: 97 additions & 0 deletions reference.md
@@ -15161,6 +15161,103 @@ client.prompts.versions.update(
</dl>


</dd>
</dl>
</details>

<details><summary><code>client.prompts.versions.<a href="src/label_studio_sdk/prompts/versions/client.py">cost_estimate</a>(...)</code></summary>
<dl>
<dd>

#### 📝 Description

<dl>
<dd>

<dl>
<dd>

Get cost estimate for running a prompt version on a particular project/subset
</dd>
</dl>
</dd>
</dl>

#### 🔌 Usage

<dl>
<dd>

<dl>
<dd>

```python
from label_studio_sdk.client import LabelStudio

client = LabelStudio(
api_key="YOUR_API_KEY",
)
client.prompts.versions.cost_estimate(
prompt_id=1,
version_id=1,
project_id=1,
project_subset=1,
)

```
</dd>
</dl>
</dd>
</dl>

#### ⚙️ Parameters

<dl>
<dd>

<dl>
<dd>

**prompt_id:** `int` — Prompt ID

</dd>
</dl>

<dl>
<dd>

**version_id:** `int` — Prompt Version ID

</dd>
</dl>

<dl>
<dd>

**project_id:** `int` — ID of the project to get an estimate for running on

</dd>
</dl>

<dl>
<dd>

**project_subset:** `int` — Subset of the project to get an estimate for running on (e.g. 'All', 'Sample', or 'HasGT')

</dd>
</dl>

<dl>
<dd>

**request_options:** `typing.Optional[RequestOptions]` — Request-specific configuration.

</dd>
</dl>
</dd>
</dl>


</dd>
</dl>
</details>
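Because the schema models each cost as an optional string, callers will typically need to convert and guard the fields themselves. A short sketch following the usage example above (the `None` handling and float conversion are illustrative, not part of the generated reference):

```python
from label_studio_sdk.client import LabelStudio

client = LabelStudio(api_key="YOUR_API_KEY")

estimate = client.prompts.versions.cost_estimate(
    prompt_id=1,
    version_id=1,
    project_id=1,
    project_subset=1,
)

# Each cost field is an optional string, so guard against None before converting.
total = float(estimate.total_cost_usd) if estimate.total_cost_usd is not None else 0.0
print(f"Estimated total cost: ${total:.4f}")
```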
2 changes: 2 additions & 0 deletions src/label_studio_sdk/__init__.py
@@ -35,6 +35,7 @@
GcsImportStorage,
GcsImportStorageStatus,
InferenceRun,
InferenceRunCostEstimate,
InferenceRunCreatedBy,
InferenceRunOrganization,
InferenceRunProjectSubset,
@@ -222,6 +223,7 @@
"GcsImportStorageStatus",
"ImportStorageListTypesResponseItem",
"InferenceRun",
"InferenceRunCostEstimate",
"InferenceRunCreatedBy",
"InferenceRunOrganization",
"InferenceRunProjectSubset",
125 changes: 125 additions & 0 deletions src/label_studio_sdk/prompts/versions/client.py
@@ -9,6 +9,7 @@
from ...core.jsonable_encoder import jsonable_encoder
from ...core.pydantic_utilities import pydantic_v1
from ...core.request_options import RequestOptions
from ...types.inference_run_cost_estimate import InferenceRunCostEstimate
from ...types.prompt_version import PromptVersion
from ...types.prompt_version_created_by import PromptVersionCreatedBy
from ...types.prompt_version_organization import PromptVersionOrganization
@@ -336,6 +337,68 @@ def update(
raise ApiError(status_code=_response.status_code, body=_response.text)
raise ApiError(status_code=_response.status_code, body=_response_json)

def cost_estimate(
self,
prompt_id: int,
version_id: int,
*,
project_id: int,
project_subset: int,
request_options: typing.Optional[RequestOptions] = None,
) -> InferenceRunCostEstimate:
"""
Get cost estimate for running a prompt version on a particular project/subset

Parameters
----------
prompt_id : int
Prompt ID

version_id : int
Prompt Version ID

project_id : int
ID of the project to get an estimate for running on

project_subset : int
Subset of the project to get an estimate for running on (e.g. 'All', 'Sample', or 'HasGT')

request_options : typing.Optional[RequestOptions]
Request-specific configuration.

Returns
-------
InferenceRunCostEstimate


Examples
--------
from label_studio_sdk.client import LabelStudio

client = LabelStudio(
api_key="YOUR_API_KEY",
)
client.prompts.versions.cost_estimate(
prompt_id=1,
version_id=1,
project_id=1,
project_subset=1,
)
"""
_response = self._client_wrapper.httpx_client.request(
f"api/prompts/{jsonable_encoder(prompt_id)}/versions/{jsonable_encoder(version_id)}/cost-estimate",
method="POST",
params={"project_id": project_id, "project_subset": project_subset},
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
return pydantic_v1.parse_obj_as(InferenceRunCostEstimate, _response.json()) # type: ignore
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, body=_response.text)
raise ApiError(status_code=_response.status_code, body=_response_json)

def get_refined_prompt(
self,
prompt_id: int,
@@ -789,6 +852,68 @@ async def update(
raise ApiError(status_code=_response.status_code, body=_response.text)
raise ApiError(status_code=_response.status_code, body=_response_json)

async def cost_estimate(
self,
prompt_id: int,
version_id: int,
*,
project_id: int,
project_subset: int,
request_options: typing.Optional[RequestOptions] = None,
) -> InferenceRunCostEstimate:
"""
Get cost estimate for running a prompt version on a particular project/subset

Parameters
----------
prompt_id : int
Prompt ID

version_id : int
Prompt Version ID

project_id : int
ID of the project to get an estimate for running on

project_subset : int
Subset of the project to get an estimate for running on (e.g. 'All', 'Sample', or 'HasGT')

request_options : typing.Optional[RequestOptions]
Request-specific configuration.

Returns
-------
InferenceRunCostEstimate


Examples
--------
from label_studio_sdk.client import AsyncLabelStudio

client = AsyncLabelStudio(
api_key="YOUR_API_KEY",
)
await client.prompts.versions.cost_estimate(
prompt_id=1,
version_id=1,
project_id=1,
project_subset=1,
)
"""
_response = await self._client_wrapper.httpx_client.request(
f"api/prompts/{jsonable_encoder(prompt_id)}/versions/{jsonable_encoder(version_id)}/cost-estimate",
method="POST",
params={"project_id": project_id, "project_subset": project_subset},
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
return pydantic_v1.parse_obj_as(InferenceRunCostEstimate, _response.json()) # type: ignore
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, body=_response.text)
raise ApiError(status_code=_response.status_code, body=_response_json)

async def get_refined_prompt(
self,
prompt_id: int,
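The async variant mirrors the synchronous method. A minimal sketch of driving it from a script (the `asyncio` wiring is illustrative and not part of the diff):

```python
import asyncio

from label_studio_sdk.client import AsyncLabelStudio


async def main() -> None:
    client = AsyncLabelStudio(api_key="YOUR_API_KEY")
    estimate = await client.prompts.versions.cost_estimate(
        prompt_id=1,
        version_id=1,
        project_id=1,
        project_subset=1,
    )
    # The returned model exposes prompt_cost_usd, completion_cost_usd and total_cost_usd.
    print(estimate.total_cost_usd)


asyncio.run(main())
```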
2 changes: 2 additions & 0 deletions src/label_studio_sdk/types/__init__.py
@@ -34,6 +34,7 @@
from .gcs_import_storage import GcsImportStorage
from .gcs_import_storage_status import GcsImportStorageStatus
from .inference_run import InferenceRun
from .inference_run_cost_estimate import InferenceRunCostEstimate
from .inference_run_created_by import InferenceRunCreatedBy
from .inference_run_organization import InferenceRunOrganization
from .inference_run_project_subset import InferenceRunProjectSubset
@@ -130,6 +131,7 @@
"GcsImportStorage",
"GcsImportStorageStatus",
"InferenceRun",
"InferenceRunCostEstimate",
"InferenceRunCreatedBy",
"InferenceRunOrganization",
"InferenceRunProjectSubset",
42 changes: 42 additions & 0 deletions src/label_studio_sdk/types/inference_run_cost_estimate.py
@@ -0,0 +1,42 @@
# This file was auto-generated by Fern from our API Definition.

import datetime as dt
import typing

from ..core.datetime_utils import serialize_datetime
from ..core.pydantic_utilities import deep_union_pydantic_dicts, pydantic_v1


class InferenceRunCostEstimate(pydantic_v1.BaseModel):
prompt_cost_usd: typing.Optional[str] = pydantic_v1.Field(default=None)
"""
Cost of the prompt (in USD)
"""

completion_cost_usd: typing.Optional[str] = pydantic_v1.Field(default=None)
"""
Cost of the completion (in USD)
"""

total_cost_usd: typing.Optional[str] = pydantic_v1.Field(default=None)
"""
Total cost of the inference (in USD)
"""

def json(self, **kwargs: typing.Any) -> str:
kwargs_with_defaults: typing.Any = {"by_alias": True, "exclude_unset": True, **kwargs}
return super().json(**kwargs_with_defaults)

def dict(self, **kwargs: typing.Any) -> typing.Dict[str, typing.Any]:
kwargs_with_defaults_exclude_unset: typing.Any = {"by_alias": True, "exclude_unset": True, **kwargs}
kwargs_with_defaults_exclude_none: typing.Any = {"by_alias": True, "exclude_none": True, **kwargs}

return deep_union_pydantic_dicts(
super().dict(**kwargs_with_defaults_exclude_unset), super().dict(**kwargs_with_defaults_exclude_none)
)

class Config:
frozen = True
smart_union = True
extra = pydantic_v1.Extra.allow
json_encoders = {dt.datetime: serialize_datetime}
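Like the other generated models, the type can be built straight from a response dict; a brief sketch (the sample values are placeholders):

```python
from label_studio_sdk import InferenceRunCostEstimate

# parse_obj comes from the pydantic v1 BaseModel; unknown keys are tolerated
# because the model's Config sets Extra.allow.
estimate = InferenceRunCostEstimate.parse_obj(
    {
        "prompt_cost_usd": "0.0420",
        "completion_cost_usd": "0.0130",
        "total_cost_usd": "0.0550",
    }
)
print(estimate.json())
```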