@@ -288,10 +288,10 @@ def create(

           truncation: The truncation strategy to use for the model response.

-          - `auto`: If the context of this response and previous ones exceeds the model's
-            context window size, the model will truncate the response to fit the context
-            window by dropping input items in the middle of the conversation.
-          - `disabled` (default): If a model response will exceed the context window size
+          - `auto`: If the input to this Response exceeds the model's context window size,
+            the model will truncate the response to fit the context window by dropping
+            items from the beginning of the conversation.
+          - `disabled` (default): If the input size will exceed the context window size
             for a model, the request will fail with a 400 error.

           user: This field is being replaced by `safety_identifier` and `prompt_cache_key`. Use
@@ -527,10 +527,10 @@ def create(

           truncation: The truncation strategy to use for the model response.

-          - `auto`: If the context of this response and previous ones exceeds the model's
-            context window size, the model will truncate the response to fit the context
-            window by dropping input items in the middle of the conversation.
-          - `disabled` (default): If a model response will exceed the context window size
+          - `auto`: If the input to this Response exceeds the model's context window size,
+            the model will truncate the response to fit the context window by dropping
+            items from the beginning of the conversation.
+          - `disabled` (default): If the input size will exceed the context window size
             for a model, the request will fail with a 400 error.

           user: This field is being replaced by `safety_identifier` and `prompt_cache_key`. Use
@@ -766,10 +766,10 @@ def create(

           truncation: The truncation strategy to use for the model response.

-          - `auto`: If the context of this response and previous ones exceeds the model's
-            context window size, the model will truncate the response to fit the context
-            window by dropping input items in the middle of the conversation.
-          - `disabled` (default): If a model response will exceed the context window size
+          - `auto`: If the input to this Response exceeds the model's context window size,
+            the model will truncate the response to fit the context window by dropping
+            items from the beginning of the conversation.
+          - `disabled` (default): If the input size will exceed the context window size
             for a model, the request will fail with a 400 error.

           user: This field is being replaced by `safety_identifier` and `prompt_cache_key`. Use
@@ -1719,10 +1719,10 @@ async def create(

           truncation: The truncation strategy to use for the model response.

-          - `auto`: If the context of this response and previous ones exceeds the model's
-            context window size, the model will truncate the response to fit the context
-            window by dropping input items in the middle of the conversation.
-          - `disabled` (default): If a model response will exceed the context window size
+          - `auto`: If the input to this Response exceeds the model's context window size,
+            the model will truncate the response to fit the context window by dropping
+            items from the beginning of the conversation.
+          - `disabled` (default): If the input size will exceed the context window size
             for a model, the request will fail with a 400 error.

           user: This field is being replaced by `safety_identifier` and `prompt_cache_key`. Use
@@ -1958,10 +1958,10 @@ async def create(

           truncation: The truncation strategy to use for the model response.

-          - `auto`: If the context of this response and previous ones exceeds the model's
-            context window size, the model will truncate the response to fit the context
-            window by dropping input items in the middle of the conversation.
-          - `disabled` (default): If a model response will exceed the context window size
+          - `auto`: If the input to this Response exceeds the model's context window size,
+            the model will truncate the response to fit the context window by dropping
+            items from the beginning of the conversation.
+          - `disabled` (default): If the input size will exceed the context window size
             for a model, the request will fail with a 400 error.

           user: This field is being replaced by `safety_identifier` and `prompt_cache_key`. Use
@@ -2197,10 +2197,10 @@ async def create(

           truncation: The truncation strategy to use for the model response.

-          - `auto`: If the context of this response and previous ones exceeds the model's
-            context window size, the model will truncate the response to fit the context
-            window by dropping input items in the middle of the conversation.
-          - `disabled` (default): If a model response will exceed the context window size
+          - `auto`: If the input to this Response exceeds the model's context window size,
+            the model will truncate the response to fit the context window by dropping
+            items from the beginning of the conversation.
+          - `disabled` (default): If the input size will exceed the context window size
             for a model, the request will fail with a 400 error.

           user: This field is being replaced by `safety_identifier` and `prompt_cache_key`. Use
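The two strategies the updated docstring describes can be sketched client-side. This is a minimal illustration of the documented behavior only, not SDK code: `apply_truncation`, its token accounting, and the `context_window` parameter are hypothetical stand-ins.

```python
def apply_truncation(items, token_counts, context_window, truncation="disabled"):
    """Mimic the documented `truncation` strategies for a Response.

    items:        conversation input items, oldest first
    token_counts: token count per item (parallel to `items`)
    """
    total = sum(token_counts)
    if total <= context_window:
        return list(items)
    if truncation == "disabled":
        # Mirrors the documented default: input over the window fails (400).
        raise ValueError("input exceeds context window (400)")
    # `auto`: drop items from the beginning of the conversation until it fits.
    items, token_counts = list(items), list(token_counts)
    while total > context_window and items:
        total -= token_counts.pop(0)
        items.pop(0)
    return items
```

With `auto`, the oldest items are discarded first, which matches the new wording ("dropping items from the beginning of the conversation") rather than the old "middle of the conversation" phrasing.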