Llama Stack Client. It is a library available for Python, Swift, Node.js, or Kotlin that "wraps" the REST API stack in a way that is easier for many applications to consume.
user_data_collection:
  transcripts_storage: "/tmp/data/transcripts"
```
### MCP Server and Tool Configuration
**Note**: The `run.yaml` configuration is currently an implementation detail. In the future, all configuration will be available directly from the lightspeed-core config.
#### Configuring MCP Servers
MCP (Model Context Protocol) servers provide tools and capabilities to the AI agents. These are configured in the `mcp_servers` section of your `lightspeed-stack.yaml`:
```yaml
mcp_servers:
  - name: "filesystem-tools"
    provider_id: "model-context-protocol"
    url: "http://localhost:3000"
  - name: "git-tools"
    provider_id: "model-context-protocol"
    url: "http://localhost:3001"
  - name: "database-tools"
    provider_id: "model-context-protocol"
    url: "http://localhost:3002"
```
**Important**: Only MCP servers defined in the `lightspeed-stack.yaml` configuration are available to the agents. Tools configured in the llama-stack `run.yaml` are not accessible to lightspeed-core agents.
#### Configuring MCP Headers
MCP headers allow you to pass authentication tokens, API keys, or other metadata to MCP servers. These are configured **per request** via the `MCP-HEADERS` HTTP header:
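As a concrete illustration, a client can attach the `MCP-HEADERS` header to each request it sends. The sketch below is an assumption for illustration only: the endpoint path (`/v1/query`), port, request body shape, and the JSON structure used for the header value (a map from MCP server URL to the headers that server should receive) are hypothetical and not part of the documented API.

```python
# Hypothetical sketch: sending per-request MCP headers to a
# lightspeed-core service. Endpoint, port, and payload shapes
# are illustrative assumptions, not the documented API.
import json
import urllib.request

# Map each MCP server URL to the headers it should receive
# (assumed format for this sketch).
mcp_headers = {
    "http://localhost:3000": {"Authorization": "Bearer <token>"},
}

req = urllib.request.Request(
    "http://localhost:8080/v1/query",
    data=json.dumps({"query": "List files in the repo"}).encode(),
    headers={
        "Content-Type": "application/json",
        # The per-request MCP headers travel as a JSON-encoded HTTP header.
        "MCP-HEADERS": json.dumps(mcp_headers),
    },
)
# response = urllib.request.urlopen(req)  # uncomment to actually send
```

Because the headers are supplied per request, different callers can pass different credentials to the same MCP servers without any server-side reconfiguration.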
To run Llama Stack in a separate process, you need to have all dependencies installed. The easiest way to do this is to create a separate repository with the Llama Stack project file `pyproject.toml` and the Llama Stack configuration file `run.yaml`. The project file might look like: