- Interactive mode (default): Guides you through selecting a provider type (text completion or chat completion) and then shows available providers for that type
- List mode (`--list`): Simply lists all available providers without interactive selection
@@ -133,7 +132,7 @@ models:
     engine: deepseek
     model: deepseek-reasoner
     reasoning_config:
-      remove_reasoning_traces: True
+      remove_thinking_traces: True
       start_token: "<think>"
       end_token: "</think>"
 ```
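For reference, a minimal sketch (not part of this diff) of loading a configuration that uses the renamed `remove_thinking_traces` field through `RailsConfig.from_content`; the `type: main` entry and the exact indentation are assumptions about the standard models layout rather than content from this change:

```python
from nemoguardrails import RailsConfig

# Only engine, model, and the reasoning_config values come from the documentation
# example above; the surrounding "type: main" entry is an assumed layout.
yaml_content = """
models:
  - type: main
    engine: deepseek
    model: deepseek-reasoner
    reasoning_config:
      remove_thinking_traces: True
      start_token: "<think>"
      end_token: "</think>"
"""

# Parsing fails with a validation error if reasoning_config contains keys that
# ReasoningModelConfig does not define.
config = RailsConfig.from_content(yaml_content=yaml_content)
```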
@@ -143,7 +142,7 @@ By removing the traces, the guardrails runtime processes only the actual respons
 You can specify the following parameters for a reasoning model:

-- `remove_reasoning_traces`: if the reasoning traces should be ignored (default `True`).
+- `remove_thinking_traces`: if the reasoning traces should be ignored (default `True`).
 - `start_token`: the start token for the reasoning process (default `<think>`).
 - `end_token`: the end token for the reasoning process (default `</think>`).
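To make the behaviour of these parameters concrete, here is a self-contained sketch of stripping a reasoning trace delimited by `start_token` and `end_token`. It illustrates the concept only and is not the library's actual output parser:

```python
def strip_reasoning_trace(text: str, start_token: str = "<think>", end_token: str = "</think>") -> str:
    """Remove everything between start_token and end_token, keeping the final answer."""
    start = text.find(start_token)
    end = text.find(end_token)
    if start == -1 or end == -1 or end < start:
        # No complete trace found; return the text unchanged.
        return text
    return (text[:start] + text[end + len(end_token):]).strip()


raw = "<think>The user asks for the capital of France; recall geography.</think>The capital of France is Paris."
print(strip_reasoning_trace(raw))  # -> The capital of France is Paris.
```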
nemoguardrails/rails/llm/config.py (+6, -25 lines)
@@ -71,13 +71,9 @@
 class ReasoningModelConfig(BaseModel):
     """Configuration for reasoning models/LLMs, including start and end tokens for reasoning traces."""

-    remove_reasoning_traces: Optional[bool] = Field(
-        default=True,
-        description="For reasoning models (e.g. DeepSeek-r1), if the output parser should remove reasoning traces.",
-    )
     remove_thinking_traces: Optional[bool] = Field(
-        default=None,
-        description="[DEPRECATED] Use remove_reasoning_traces instead. For reasoning models (e.g. DeepSeek-r1), if the output parser should remove thinking traces.",
+        default=True,
+        description="For reasoning models (e.g. DeepSeek-r1), if the output parser should remove thinking traces.",
     )
     start_token: Optional[str] = Field(
         default="<think>",
@@ -88,21 +84,6 @@ class ReasoningModelConfig(BaseModel):
         description="The end token used for reasoning traces.",