Many agent scaffolds assume access to an OpenAI-API-compatible HTTP endpoint for obtaining inference engine rollouts. Currently, we provide the generic InferenceEngineClient (which exposes the InferenceEngineInterface) to agent stacks, and rollouts are obtained by calling generate() directly. To support these scaffolds, this client should also expose an HTTP endpoint.
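As a rough sketch of what this could look like, the snippet below wraps a stubbed stand-in for InferenceEngineClient behind an OpenAI-style /v1/chat/completions route using only the standard library. The InferenceEngineClient stub, the generate() signature, and the request-to-prompt mapping are all assumptions for illustration; the real endpoint would likely be built on the project's existing HTTP stack and the actual client.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Hypothetical stand-in for InferenceEngineClient; the real class forwards
# generate() calls to the underlying inference engines.
class InferenceEngineClient:
    def generate(self, prompts):
        # Stubbed: a real implementation would run engine rollouts.
        return [f"rollout for: {p}" for p in prompts]

client = InferenceEngineClient()

class OpenAICompatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/chat/completions":
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        # Map the last user message to a single-prompt generate() call;
        # a real adapter would translate the full chat template.
        prompt = body["messages"][-1]["content"]
        text = client.generate([prompt])[0]
        resp = {
            "object": "chat.completion",
            "model": body.get("model", "policy"),
            "choices": [{
                "index": 0,
                "message": {"role": "assistant", "content": text},
                "finish_reason": "stop",
            }],
        }
        payload = json.dumps(resp).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # silence per-request logging

def serve(port=0):
    """Start the endpoint on a background thread; port=0 picks a free port."""
    server = ThreadingHTTPServer(("127.0.0.1", port), OpenAICompatHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

An agent scaffold could then point any OpenAI-compatible SDK at the returned server's address without knowing anything about the engine-side interface.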