Conversation

yanxi0830 (Contributor) commented Jan 31, 2025

What does this PR do?

Fixes the streaming vs. non-streaming return type mismatch in the chat-completion OpenAPI spec; the full description is in the referenced commit below.
Test Plan

```bash
LLAMA_STACK_BASE_URL=http://localhost:8321 pytest -v tests/client-sdk/inference/test_inference.py

LLAMA_STACK_CONFIG="./llama_stack/templates/fireworks/run.yaml" pytest -v tests/inference/test_inference.py

LLAMA_STACK_BASE_URL=http://localhost:8321 pytest -v tests/client-sdk/agents/test_agents.py::test_agent_simple
```

Sources

Please link relevant resources if necessary.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Ran pre-commit to handle lint / formatting issues.
  • Read the contributor guideline, Pull Request section.
  • Updated relevant documentation.
  • Wrote necessary unit or integration tests.

yanxi0830 marked this pull request as ready for review January 31, 2025 01:17
yanxi0830 changed the title from "Sync updates from stainless branch: yanxi0830/dev" to "Fix streaming/non-streaming return type differences" Jan 31, 2025
ashwinb (Contributor) commented Jan 31, 2025

Can you also run the agents tests once?

yanxi0830 added a commit to llamastack/llama-stack that referenced this pull request Jan 31, 2025
# What does this PR do?

The spec currently advertises both the non-streaming and streaming response schemas as a `oneOf` under a single `text/event-stream` content type, so generated clients cannot tell the two return types apart. We need to change

```yaml
/v1/inference/chat-completion:
    post:
      responses:
        '200':
          description: >-
            If stream=False, returns a ChatCompletionResponse with the full completion.
            If stream=True, returns an SSE event stream of ChatCompletionResponseStreamChunk
          content:
            text/event-stream:
              schema:
                oneOf:
                  - $ref: '#/components/schemas/ChatCompletionResponse'
                  - $ref: '#/components/schemas/ChatCompletionResponseStreamChunk'
```

into

```yaml
/v1/inference/chat-completion:
    post:
      responses:
        '200':
          description: >-
            If stream=False, returns a ChatCompletionResponse with the full completion.
            If stream=True, returns an SSE event stream of ChatCompletionResponseStreamChunk
          content:
            text/event-stream:
              schema:
                $ref: '#/components/schemas/ChatCompletionResponseStreamChunk'
            application/json:
              schema:
                $ref: '#/components/schemas/ChatCompletionResponse'
```
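
For illustration, here is a minimal Python sketch of what the new contract lets a client do: dispatch on the `stream` flag, parsing a single `application/json` body in the non-streaming case and consuming `text/event-stream` chunks otherwise. This is not the server or SDK implementation, and the request-body fields (`model_id`, `messages`) are assumptions for the example:

```python
# Minimal sketch, not the SDK implementation. Assumes a local server at
# the test-plan URL; the request-body fields are illustrative.
import json

import requests

BASE_URL = "http://localhost:8321"


def chat_completion(messages, model_id, stream=False):
    resp = requests.post(
        f"{BASE_URL}/v1/inference/chat-completion",
        json={"model_id": model_id, "messages": messages, "stream": stream},
        stream=stream,
    )
    resp.raise_for_status()
    if not stream:
        # application/json: a single ChatCompletionResponse body
        return resp.json()

    def iter_chunks():
        # text/event-stream: one ChatCompletionResponseStreamChunk per event
        for line in resp.iter_lines():
            if line.startswith(b"data:"):
                yield json.loads(line[len(b"data:"):].strip())

    return iter_chunks()
```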

## Test Plan

**Python**
- tested in SDK sync: llamastack/llama-stack-client-python#108 (a hedged usage sketch follows below)

**Node**
- tested with https://gist.github.com/yanxi0830/b782f4b91e21dcccdfef8898ce55157e (SDK update follow-up)
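
For illustration, a hypothetical snippet of how the synced Python SDK could be exercised against both return types; the client class, method, and argument names are assumptions based on the endpoint above, not confirmed from #108:

```python
# Hypothetical usage sketch: LlamaStackClient and the chat_completion
# signature are assumptions for illustration, not confirmed SDK API.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# stream=False: the call returns a single ChatCompletionResponse
response = client.inference.chat_completion(
    model_id="my-model",  # placeholder model id
    messages=[{"role": "user", "content": "Hello"}],
    stream=False,
)
print(response)

# stream=True: the call returns an iterator of
# ChatCompletionResponseStreamChunk objects
for chunk in client.inference.chat_completion(
    model_id="my-model",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
):
    print(chunk)
```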


## Sources

Please link relevant resources if necessary.


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
yanxi0830 merged commit 0f5c50d into main Jan 31, 2025
2 checks passed
yanxi0830 deleted the stream-non-stream branch January 31, 2025 02:03