refactor: move signal from data to options
sublimator committed May 6, 2024
1 parent f7ae679 commit 4b70df4
Showing 1 changed file with 49 additions and 33 deletions.
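In practical terms, the commit changes the public call shape of chat and chatStream: the AbortSignal no longer travels inside the request payload but is passed in a separate options argument. A minimal before/after sketch, assuming the package's default export and constructor (the import path, model name, and message content are illustrative, not taken from this diff):

import MistralClient from '@mistralai/mistralai';

const client = new MistralClient(process.env.MISTRAL_API_KEY);
const controller = new AbortController();

// Before this commit: the signal was mixed into the request data.
// const response = await client.chat({
//   model: 'mistral-tiny',
//   messages: [{role: 'user', content: 'What is the best French cheese?'}],
//   signal: controller.signal,
// });

// After this commit: request data and operational options are separate.
const response = await client.chat(
  {
    model: 'mistral-tiny',
    messages: [{role: 'user', content: 'What is the best French cheese?'}],
  },
  {signal: controller.signal},
);

Keeping the signal out of data means the payload sent to the API stays a clean serialization of model parameters, while transport concerns live in options.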
src/client.js (82 changes: 49 additions & 33 deletions)
@@ -233,21 +233,30 @@ class MistralClient {
};

/**
- * A chat endpoint without streaming
- * @param {*} model the name of the model to chat with, e.g. mistral-tiny
- * @param {*} messages an array of messages to chat with, e.g.
- *   [{role: 'user', content: 'What is the best French cheese?'}]
- * @param {*} tools a list of tools to use.
- * @param {*} temperature the temperature to use for sampling, e.g. 0.5
- * @param {*} maxTokens the maximum number of tokens to generate, e.g. 100
- * @param {*} topP the cumulative probability of tokens to generate, e.g. 0.9
- * @param {*} randomSeed the random seed to use for sampling, e.g. 42
- * @param {*} safeMode deprecated use safePrompt instead
- * @param {*} safePrompt whether to use safe mode, e.g. true
- * @param {*} toolChoice the tool to use, e.g. 'auto'
- * @param {*} responseFormat the format of the response, e.g. 'json_format'
- * @param {*} [signal] - optional AbortSignal instance to control request
- *   The signal will be combined with default timeout signal
+ * A chat endpoint without streaming.
+ *
+ * @param {Object} data - The main chat configuration.
+ * @param {*} data.model - the name of the model to chat with,
+ *   e.g. mistral-tiny
+ * @param {*} data.messages - an array of messages to chat with, e.g.
+ *   [{role: 'user', content: 'What is the best French cheese?'}]
+ * @param {*} data.tools - a list of tools to use.
+ * @param {*} data.temperature - the temperature to use for sampling, e.g. 0.5
+ * @param {*} data.maxTokens - the maximum number of tokens to generate,
+ *   e.g. 100
+ * @param {*} data.topP - the cumulative probability of tokens to generate,
+ *   e.g. 0.9
+ * @param {*} data.randomSeed - the random seed to use for sampling, e.g. 42
+ * @param {*} data.safeMode - deprecated, use safePrompt instead
+ * @param {*} data.safePrompt - whether to use safe mode, e.g. true
+ * @param {*} data.toolChoice - the tool to use, e.g. 'auto'
+ * @param {*} data.responseFormat - the format of the response,
+ *   e.g. 'json_format'
+ * @param {Object} options - Additional operational options.
+ * @param {*} [options.signal] - optional AbortSignal instance to control
+ *   the request. The signal will be combined with the default timeout
+ *   signal.
* @return {Promise<Object>}
*/
chat = async function({
@@ -260,10 +269,9 @@
randomSeed,
safeMode,
safePrompt,
- signal,
toolChoice,
responseFormat,
- }) {
+ }, {signal}) {
const request = this._makeChatCompletionRequest(
model,
messages,
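The destructured {signal} is forwarded with the request built above. The docstring says the caller's signal "will be combined with the default timeout signal"; one plausible way to express that combination is sketched below. The combineSignals helper is hypothetical and not part of this diff; on recent runtimes AbortSignal.any would do the same job natively.

// Hypothetical sketch, not the client's actual internals: merge a
// caller-supplied AbortSignal with a default timeout so that whichever
// fires first aborts the request.
function combineSignals(signal, timeoutMs = 120000) {
  const controller = new AbortController();
  const timer = setTimeout(
    () => controller.abort(new Error(`timed out after ${timeoutMs} ms`)),
    timeoutMs,
  );
  if (signal) {
    signal.addEventListener(
      'abort',
      () => {
        clearTimeout(timer);
        controller.abort(signal.reason);
      },
      {once: true},
    );
  }
  return controller.signal;
}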
@@ -289,20 +297,29 @@

/**
* A chat endpoint that streams responses.
- * @param {*} model the name of the model to chat with, e.g. mistral-tiny
- * @param {*} messages an array of messages to chat with, e.g.
- *   [{role: 'user', content: 'What is the best French cheese?'}]
- * @param {*} tools a list of tools to use.
- * @param {*} temperature the temperature to use for sampling, e.g. 0.5
- * @param {*} maxTokens the maximum number of tokens to generate, e.g. 100
- * @param {*} topP the cumulative probability of tokens to generate, e.g. 0.9
- * @param {*} randomSeed the random seed to use for sampling, e.g. 42
- * @param {*} safeMode deprecated use safePrompt instead
- * @param {*} safePrompt whether to use safe mode, e.g. true
- * @param {*} toolChoice the tool to use, e.g. 'auto'
- * @param {*} responseFormat the format of the response, e.g. 'json_format'
- * @param {*} [signal] - optional AbortSignal instance to control request
- *   The signal will be combined with default timeout signal
+ *
+ * @param {Object} data - The main chat configuration.
+ * @param {*} data.model - the name of the model to chat with,
+ *   e.g. mistral-tiny
+ * @param {*} data.messages - an array of messages to chat with, e.g.
+ *   [{role: 'user', content: 'What is the best French cheese?'}]
+ * @param {*} data.tools - a list of tools to use.
+ * @param {*} data.temperature - the temperature to use for sampling, e.g. 0.5
+ * @param {*} data.maxTokens - the maximum number of tokens to generate,
+ *   e.g. 100
+ * @param {*} data.topP - the cumulative probability of tokens to generate,
+ *   e.g. 0.9
+ * @param {*} data.randomSeed - the random seed to use for sampling, e.g. 42
+ * @param {*} data.safeMode - deprecated, use safePrompt instead
+ * @param {*} data.safePrompt - whether to use safe mode, e.g. true
+ * @param {*} data.toolChoice - the tool to use, e.g. 'auto'
+ * @param {*} data.responseFormat - the format of the response,
+ *   e.g. 'json_format'
+ * @param {Object} options - Additional operational options.
+ * @param {*} [options.signal] - optional AbortSignal instance to control
+ *   the request. The signal will be combined with the default timeout
+ *   signal.
* @return {Promise<Object>}
*/
chatStream = async function* ({
@@ -315,10 +332,9 @@
randomSeed,
safeMode,
safePrompt,
- signal,
toolChoice,
responseFormat,
- }) {
+ }, {signal}) {
const request = this._makeChatCompletionRequest(
model,
messages,
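chatStream follows the same two-argument shape, which makes cancelling a long stream straightforward. A usage sketch reusing the client from the earlier example; the chunk shape (choices[0].delta.content) is assumed from the client's streaming output, so treat it as illustrative:

const controller = new AbortController();
setTimeout(() => controller.abort(), 5000); // stop streaming after 5 seconds

try {
  const stream = client.chatStream(
    {
      model: 'mistral-tiny',
      messages: [{role: 'user', content: 'Tell me about French cheese.'}],
    },
    {signal: controller.signal},
  );
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
  }
} catch (err) {
  // An abort rejects the pending request; rethrow anything unexpected.
  if (!controller.signal.aborted) throw err;
}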
