Commit 2cc494c

Merge pull request #390 from gallettilance/docs-mcp-servers

LCORE-226: How to register MCP servers with LCORE

2 parents 7a531cb + 1824419, commit 2cc494c

1 file changed: +38 −0 lines changed
README.md

@@ -96,6 +96,7 @@
Llama Stack Client. It is a library available for Python, Swift, Node.js, or
Kotlin, which "wraps" the REST API stack in a way that is easier for
many applications to consume.

![Integration with Llama Stack](docs/core2llama-stack_interface.png)

@@ -123,8 +124,45 @@ user_data_collection:
  transcripts_storage: "/tmp/data/transcripts"
```

### MCP Server and Tool Configuration

**Note**: The `run.yaml` configuration is currently an implementation detail. In the future, all configuration will be available directly from the lightspeed-core config.

#### Configuring MCP Servers

MCP (Model Context Protocol) servers provide tools and capabilities to the AI agents. These are configured in the `mcp_servers` section of your `lightspeed-stack.yaml`:

```yaml
mcp_servers:
  - name: "filesystem-tools"
    provider_id: "model-context-protocol"
    url: "http://localhost:3000"
  - name: "git-tools"
    provider_id: "model-context-protocol"
    url: "http://localhost:3001"
  - name: "database-tools"
    provider_id: "model-context-protocol"
    url: "http://localhost:3002"
```

**Important**: Only MCP servers defined in the `lightspeed-stack.yaml` configuration are available to the agents. Tools configured in the llama-stack `run.yaml` are not accessible to lightspeed-core agents.

#### Configuring MCP Headers

MCP headers allow you to pass authentication tokens, API keys, or other metadata to MCP servers. These are configured **per request** via the `MCP-HEADERS` HTTP header:

```bash
curl -X POST "http://localhost:8080/v1/query" \
  -H "Content-Type: application/json" \
  -H "MCP-HEADERS: {\"filesystem-tools\": {\"Authorization\": \"Bearer token123\"}}" \
  -d '{"query": "List files in /tmp"}'
```
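The same request can be issued from Python's standard library, where `json.dumps` handles the escaping of the nested `MCP-HEADERS` value that the curl example writes by hand. This is a minimal sketch using the placeholder URL, server name, and token from the example above, not a prescribed client:

```python
import json
import urllib.request

# Per-server headers forwarded to MCP servers; the keys are server
# names from the mcp_servers section of lightspeed-stack.yaml.
mcp_headers = {"filesystem-tools": {"Authorization": "Bearer token123"}}

request = urllib.request.Request(
    "http://localhost:8080/v1/query",
    data=json.dumps({"query": "List files in /tmp"}).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # json.dumps produces the same escaped JSON string the curl
        # example passes literally.
        "MCP-HEADERS": json.dumps(mcp_headers),
    },
    method="POST",
)

# Requires a running lightspeed-core service:
# response = urllib.request.urlopen(request)
```

Building the header value with `json.dumps` avoids quoting mistakes when the token or server name contains characters that would need shell escaping in curl.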
### Llama Stack project and configuration

**Note**: The `run.yaml` configuration is currently an implementation detail. In the future, all configuration will be available directly from the lightspeed-core config.

To run Llama Stack in a separate process, you need to have all dependencies installed. The easiest way to do this is to create a separate repository with a Llama Stack project file `pyproject.toml` and a Llama Stack configuration file `run.yaml`. The project file might look like:

```toml
