
Commit 857981b: remove rotten scripts
1 parent: c8302cd

File tree: 5 files changed, +21 -244 lines


README.md (21 additions, 2 deletions)

@@ -25,7 +25,7 @@ pip install llama-stack-client
 
 ## Usage
 
-The full API of this library can be found in [api.md](api.md). You may find basic client examples in the [/examples](./examples/) folder.
+The full API of this library can be found in [api.md](api.md). You may find basic client examples in our [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main) repo.
 
 ```python
 from llama_stack_client import LlamaStackClient
@@ -42,12 +42,31 @@ response = client.inference.chat_completion(
             role="user",
         ),
     ],
-    model="Llama3.1-8B-Instruct",
+    model="meta-llama/Llama-3.2-3B-Instruct",
     stream=False,
 )
 print(response)
 ```
 
+After installing the `llama-stack-client` package, you can also use the [`llama-stack-client` CLI](https://github.com/meta-llama/llama-stack/tree/main/llama-stack-client) to interact with the Llama Stack server.
+
+```bash
+llama-stack-client inference chat-completion --message "hello, what model are you"
+```
+
+```python
+ChatCompletionResponse(
+    completion_message=CompletionMessage(
+        content="Hello! I'm an AI model, and I'm based on a large language model architecture. My knowledge cutoff is December 2023, which means I was trained on a dataset that was current up to that point in time.\n\nI don't have a specific model name, but I'm similar to other conversational AI models like LLaMA, Bard, or ChatGPT. My primary function is to understand and respond to human language, generating human-like text based on the input I receive.\n\nI'm designed to be helpful and informative, and I can assist with a wide range of topics and tasks, from answering questions and providing information to generating text and completing tasks. How can I help you today?",
+        role='assistant',
+        stop_reason='end_of_turn',
+        tool_calls=[]
+    ),
+    logprobs=None
+)
+```
+
 ## Async usage
 
 Simply import `AsyncLlamaStackClient` instead of `LlamaStackClient` and use `await` with each API call:
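The async client itself is not shown in this diff. As a rough sketch of the await pattern the README describes, the following uses a hypothetical stub in place of `AsyncLlamaStackClient`, since the real client issues HTTP requests to a running Llama Stack server:

```python
import asyncio

# Hypothetical stand-in for llama_stack_client.AsyncLlamaStackClient;
# the real client talks to a running Llama Stack server over HTTP.
class StubInference:
    async def chat_completion(self, messages, model, stream=False):
        # Return a canned assistant reply instead of calling a model.
        return {"completion_message": {"role": "assistant", "content": "Hello!"}}

class StubAsyncClient:
    def __init__(self):
        self.inference = StubInference()

async def main():
    client = StubAsyncClient()
    # Same call shape as the sync example in the diff above, but awaited.
    response = await client.inference.chat_completion(
        messages=[{"content": "hello, what model are you", "role": "user"}],
        model="meta-llama/Llama-3.2-3B-Instruct",
        stream=False,
    )
    print(response["completion_message"]["role"])  # prints "assistant"

asyncio.run(main())
```

With the real package installed, only the import and construction change; the `await client.inference.chat_completion(...)` call pattern is what the README's async section points at.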

Deleted files:

- examples/README.md (15 lines)
- examples/inference/client.py (53 lines)
- examples/memory/client.py (131 lines)
- examples/safety/client.py (43 lines)
