* **evals:** return partial results when llm function is interrupted ([#1755](https://github.com/Arize-ai/phoenix/issues/1755)) ([1fb0849](https://github.com/Arize-ai/phoenix/commit/1fb0849a4e5f39c6afc90a1417300747a0bf4bf6))
* LiteLLM model support for evals ([#1675](https://github.com/Arize-ai/phoenix/issues/1675)) ([5f2a999](https://github.com/Arize-ai/phoenix/commit/5f2a9991059e060423853567a20789eba832f65a))
* sagemaker notebook support ([#1772](https://github.com/Arize-ai/phoenix/issues/1772)) ([2c0ffbc](https://github.com/Arize-ai/phoenix/commit/2c0ffbc1479ae0255b72bc2d31d5f3204fd8e32c))

### Bug Fixes
* unpin llama-index version in tutorial notebooks ([#1766](https://github.com/Arize-ai/phoenix/issues/1766)) ([5ff74e3](https://github.com/Arize-ai/phoenix/commit/5ff74e3895f1b0c5642bd0897dd65e6f2913a7bd))
### Documentation
* add instructions for docker build ([#1770](https://github.com/Arize-ai/phoenix/issues/1770)) ([45eb5f2](https://github.com/Arize-ai/phoenix/commit/45eb5f244997d0ff0e991879c297b564e46c9a18))
"""Minimum number of seconds to wait when retrying."""
236
+
max_content_size: Optional[int] =None
237
+
"""If you're using a fine-tuned model, set this to the maximum content size"""
238
+
```
239
+
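If you are working with a fine-tuned model, you can cap the content size when constructing the model. A minimal sketch, assuming `max_content_size` is accepted as a constructor keyword argument like the other attributes above (the value shown is a placeholder, not a recommendation):

```python
# Sketch: cap the maximum content size for a fine-tuned model.
# 4096 is a placeholder; use the actual context limit of your model.
model = LiteLLMModel(
    model_name="gpt-3.5-turbo",
    max_content_size=4096,
)
```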
You can choose among [multiple models](https://docs.litellm.ai/docs/providers) supported by LiteLLM. Make sure you have the right environment variables set before initializing the model. For additional information about the environment variables for specific model providers, see [LiteLLM provider specific params](https://docs.litellm.ai/docs/completion/input#provider-specific-params).
Here is an example of how to initialize `LiteLLMModel` for the model `"gpt-3.5-turbo"`:
```python
model = LiteLLMModel(model_name="gpt-3.5-turbo", temperature=0.0)
model("Hello world, this is a test if you are working?")
# Output: 'Hello! Yes, I am here and ready to assist you. How can I help you today?'
```
## **Usage**
In this section, we will showcase the methods and properties that our `EvalModels` have. First, instantiate your model from the [supported LLM providers](evaluation-models.md#supported-llm-providers). Once you've instantiated your `model`, you can get responses from the LLM by simply calling the model and passing a text string.
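For example, reusing the `LiteLLMModel` instantiated above (a minimal sketch; any supported `EvalModel` can be called the same way):

```python
# Calling the model object directly sends the string as a prompt
# and returns the LLM's text response.
response = model("In one sentence, what does an LLM eval do?")
print(response)
```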