
Commit 35b93a6

server : allow to get default generation settings for completion (ggml-org#5307)

z80maniac authored and hodlen committed
1 parent 7bfe882 · commit 35b93a6

2 files changed: +21 -2 lines

examples/server/README.md (+15 -1)

````diff
@@ -264,7 +264,21 @@ Notice that each `probs` is an array of length `n_probs`.
 
 It also accepts all the options of `/completion` except `stream` and `prompt`.
 
-- **GET** `/props`: Return the required assistant name and anti-prompt to generate the prompt in case you have specified a system prompt for all slots.
+- **GET** `/props`: Return current server settings.
+
+### Result JSON
+
+```json
+{
+    "assistant_name": "",
+    "user_name": "",
+    "default_generation_settings": { ... }
+}
+```
+
+- `assistant_name` - the required assistant name to generate the prompt in case you have specified a system prompt for all slots.
+- `user_name` - the required anti-prompt to generate the prompt in case you have specified a system prompt for all slots.
+- `default_generation_settings` - the default generation settings for the `/completion` endpoint, has the same fields as the `generation_settings` response object from the `/completion` endpoint.
 
 - **POST** `/v1/chat/completions`: OpenAI-compatible Chat Completions API. Given a ChatML-formatted json description in `messages`, it returns the predicted completion. Both synchronous and streaming mode are supported, so scripted and interactive applications work fine. While no strong claims of compatibility with OpenAI API spec is being made, in our experience it suffices to support many apps. Only ChatML-tuned models, such as Dolphin, OpenOrca, OpenHermes, OpenChat-3.5, etc can be used with this endpoint. Compared to `api_like_OAI.py` this API implementation does not require a wrapper to be served.
````
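To show what a client gains from the new field, here is a minimal sketch of parsing a `/props` response. The payload below is hypothetical (real values come from the server), and the single-header `nlohmann::json` (`json.hpp`, vendored by `examples/server`) is assumed:

```cpp
#include <iostream>
#include <string>

#include "json.hpp" // nlohmann::json single header, vendored by examples/server

using json = nlohmann::json;

int main() {
    // Hypothetical /props response body after this change; a real client
    // would obtain it with GET /props over HTTP.
    const std::string body = R"({
        "assistant_name": "",
        "user_name": "",
        "default_generation_settings": { "seed": -1, "temperature": 0.8, "n_predict": -1 }
    })";

    const json props    = json::parse(body);
    const json &defaults = props.at("default_generation_settings");

    // Pre-fill client-side controls from the server's defaults instead of
    // hard-coding a second copy of them.
    std::cout << "temperature: " << defaults.value("temperature", 0.0) << "\n";
    std::cout << "seed:        " << defaults.value("seed", 0)          << "\n";
    std::cout << "n_predict:   " << defaults.value("n_predict", 0)     << "\n";
    return 0;
}
```

Reading defaults this way lets a frontend pre-populate its sampling controls from the server rather than shipping its own copy of them.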

examples/server/server.cpp (+6 -1)

```diff
@@ -334,6 +334,7 @@ struct llama_server_context
 
     // slots / clients
     std::vector<llama_client_slot> slots;
+    json default_generation_settings_for_props;
 
     llama_server_queue    queue_tasks;
    llama_server_response queue_results;
@@ -430,6 +431,9 @@ struct llama_server_context
             slots.push_back(slot);
         }
 
+        default_generation_settings_for_props = get_formated_generation(slots.front());
+        default_generation_settings_for_props["seed"] = -1;
+
         batch = llama_batch_init(n_ctx, 0, params.n_parallel);
 
         // empty system prompt
@@ -2614,7 +2618,8 @@ int main(int argc, char **argv)
         res.set_header("Access-Control-Allow-Origin", req.get_header_value("Origin"));
         json data = {
             { "user_name",      llama.name_user.c_str() },
-            { "assistant_name", llama.name_assistant.c_str() }
+            { "assistant_name", llama.name_assistant.c_str() },
+            { "default_generation_settings", llama.default_generation_settings_for_props }
         };
         res.set_content(data.dump(), "application/json; charset=utf-8");
     });
```
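Shown standalone, the handler pattern introduced above looks roughly like this. It is a sketch rather than the server's actual code: it assumes cpp-httplib (`httplib.h`) and `nlohmann::json` (`json.hpp`), both vendored by `examples/server`, and the default values are placeholders:

```cpp
#include "httplib.h" // cpp-httplib, vendored by examples/server
#include "json.hpp"  // nlohmann::json, vendored by examples/server

using json = nlohmann::json;

int main() {
    httplib::Server svr;

    // Computed once at startup and reused for every request, mirroring
    // default_generation_settings_for_props in server.cpp. The values
    // here are placeholders, not the server's real defaults.
    const json default_settings = {
        { "seed",        -1  },
        { "temperature", 0.8 },
        { "n_predict",   -1  },
    };

    svr.Get("/props", [&default_settings](const httplib::Request &req, httplib::Response &res) {
        res.set_header("Access-Control-Allow-Origin", req.get_header_value("Origin"));
        json data = {
            { "user_name",                   "" },
            { "assistant_name",              "" },
            { "default_generation_settings", default_settings },
        };
        res.set_content(data.dump(), "application/json; charset=utf-8");
    });

    svr.listen("127.0.0.1", 8080);
    return 0;
}
```

Snapshotting the settings once at startup means every `/props` request serves the same cached object; the override of `seed` to `-1` presumably advertises "random" as the default rather than whatever concrete seed the first slot happened to hold.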
