Provide HTTP Endpoint for inference engine client #96

@tyler-griggs

Description

Many agent scaffolds assume access to an OpenAI-API-compatible HTTP endpoint for obtaining inference engine rollouts. Currently, we provide the generic `InferenceEngineClient` (which exposes the `InferenceEngineInterface`) to agent stacks, and they obtain model rollouts via `generate()`. This client should also expose an HTTP endpoint.
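
For concreteness, below is a minimal sketch of what this could look like: a small FastAPI app that accepts OpenAI-style `/v1/chat/completions` requests and delegates to the client's `generate()`. The `generate()` signature, the stub client, and the module name are illustrative assumptions, not the actual `InferenceEngineInterface`, and FastAPI is just one option for the server layer.

```python
# sketch_http_endpoint.py -- a minimal sketch, not the actual implementation.
# Assumptions (hypothetical): InferenceEngineClient.generate() is async,
# accepts OpenAI-style {"role", "content"} messages, and returns a string.
import time
import uuid
from typing import Any, Optional

from fastapi import FastAPI
from pydantic import BaseModel


class _StubEngineClient:
    """Stand-in so the sketch runs on its own; swap in the real
    InferenceEngineClient (its generate() signature may differ)."""

    async def generate(self, messages: list[dict[str, Any]]) -> str:
        return f"echo: {messages[-1]['content']}"


class ChatCompletionRequest(BaseModel):
    # Minimal subset of the OpenAI chat-completions request schema;
    # sampling params are accepted but ignored in this sketch.
    model: str
    messages: list[dict[str, Any]]
    temperature: float = 1.0
    max_tokens: Optional[int] = None


app = FastAPI()
client = _StubEngineClient()  # replace with the real InferenceEngineClient


@app.post("/v1/chat/completions")
async def chat_completions(req: ChatCompletionRequest) -> dict[str, Any]:
    # Delegate the rollout to the existing generate() path.
    text = await client.generate(req.messages)
    # Shape the reply like an OpenAI chat completion so off-the-shelf
    # agent scaffolds can consume it unchanged.
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": req.model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }
```

With this running (e.g., `uvicorn sketch_http_endpoint:app --port 8000`), an off-the-shelf OpenAI client can target it by setting `base_url="http://localhost:8000/v1"`, which is exactly what most agent scaffolds expect.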

Metadata

Assignees

No one assigned

Labels

No labels

Type

No type

Projects

No projects

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests