bacoco changed the title [Bug]: temperature and other llm argument used in tre_summarize → [Bug]: temperature and other llm argument used in tree_summarize on Feb 19, 2025
@bacoco Hello! I'm here to help you with any bugs, questions, or contributions. Let me know how I can assist you.
To ensure that the temperature setting is used in the get_response_synthesizer function for deterministic results, you need to make sure that the language model (LLM) is correctly configured with a temperature of 0. Here's how you can do it:
1. Initialize the LLM with temperature 0: when you create your custom LLM, ensure that the temperature parameter is set to 0. This is already done in your initialize_model function:
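For reference, this is the relevant excerpt of the initialize_model shown under Steps to Reproduce below (SERVER_URL, api_key, and MODEL come from initialize_model_variable):

custom_llm = OpenAILike(
    model=MODEL,
    api_key=api_key,
    api_base=SERVER_URL,
    is_chat_model=True,
    temperature=0,  # 0 for deterministic output
)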
2. Pass the LLM to get_response_synthesizer: when calling get_response_synthesizer, ensure that the LLM is passed correctly. It seems there might be a syntax issue in your code; you should pass the LLM as a keyword argument, without the ** operator:
response_synthesizer = get_response_synthesizer(
    response_mode="tree_summarize",
    use_async=True,
    llm=Settings.llm,  # ensure this is correctly set
    structured_answer_filtering=False,
)
3. Ensure Settings.llm is set correctly: before using Settings.llm, make sure it is set to your custom LLM:
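A minimal sketch, assuming initialize_model is changed to return the custom_llm it builds (the snippet in the report does not show a return statement):

from llama_index.core import Settings

custom_llm = initialize_model()  # assumption: returns the OpenAILike instance
Settings.llm = custom_llm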
By following these steps, you should be able to achieve deterministic results with the temperature set to 0 in the get_response_synthesizer function [1].
Bug Description
It looks like the LLM settings, in particular the temperature configured in Settings, are not used by get_response_synthesizer:
from llama_index.core import DocumentSummaryIndex, PromptTemplate, Settings, get_response_synthesizer

from .templates import load_prompts


def give_summarizer(data):
    prompts = load_prompts()
    if not prompts:
        return None
I must pass the LLM explicitly to get_response_synthesizer to get deterministic results with the temperature set to 0 (using Mistral Small 3).
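A minimal sketch of that workaround, assuming the synthesizer is built inside the give_summarizer shown above (its body is truncated in the report):

def give_summarizer(data):
    prompts = load_prompts()
    if not prompts:
        return None
    # Workaround: pass the configured LLM explicitly instead of relying on
    # get_response_synthesizer picking up the global Settings on its own.
    return get_response_synthesizer(
        response_mode="tree_summarize",
        use_async=True,
        llm=Settings.llm,
    )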
Version
0.12.12
Steps to Reproduce
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.openai_like import OpenAILike


def initialize_model(api_gpu=True):
    SERVER_URL, api_key, MODEL = initialize_model_variable(api_gpu=api_gpu)
    custom_llm = OpenAILike(
        model=MODEL,
        api_key=api_key,
        api_base=SERVER_URL,
        is_chat_model=True,
        temperature=0,
        random_seed=42,  # added for reproducibility
        repetition_penalty=1.07,  # added for better deterministic results
    )
    embed_model = HuggingFaceEmbedding(
        model_name="dangvantuan/french-document-embedding",
        trust_remote_code=True,
    )
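A hypothetical continuation (not shown in the report) that wires these objects into the global Settings, assuming initialize_model is made to return them:

from llama_index.core import Settings

custom_llm, embed_model = initialize_model()  # assumption: function returns both objects
Settings.llm = custom_llm
Settings.embed_model = embed_model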
Relevant Logs/Tracebacks