Fixed generation args issue affecting OpenAI completion model #1458
Conversation
Thanks for the bug report and fix! Could we also add a copy prior to the popping of attributes from `request_args`?
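A minimal sketch of that suggestion, assuming `request_args` is the shared per-request dict (the placement inside `generate_until` is hypothetical, not the PR's exact diff):

```python
# Sketch of the "copy first" suggestion: shallow-copy the shared kwargs
# before any pop, so the mutation stays local to this request.
request_args = dict(request_args)
until = request_args.pop("until", ["<|endoftext|>"])
```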
@haileyschoelkopf How about just using `get` instead of `pop`? Something like:

```python
self._max_gen_toks = request_args.get("max_gen_toks", self.max_gen_toks)

for context, _ in chunk:
    context_enc = self.tok_encode(context)
    inp = context_enc[-(self.max_length - self.max_gen_toks) :]
    inps.append(inp)

until = request_args.get("until", ["<|endoftext|>"])
request_args["temperature"] = request_args.get("temperature", 0)

response = oa_completion(
    client=self.client,
    model=self.model,
    prompt=inps,
    max_tokens=self.max_gen_toks,
    stop=until,
    seed=self.seed,
    **{k: v for k, v in request_args.items() if k not in ["do_sample", "max_gen_toks"]},
)
```
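Reading with `get` instead of `pop` leaves the shared dict untouched, so `until` survives across requests; the final dict-comprehension splat then filters out the harness-only keys (`do_sample`, `max_gen_toks`) before forwarding the remaining args to the API.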
@Am1n3e Sure, that works for me!
Per @baberabb -- test failures are due to the no-copy behavior being relied upon here: https://github.com/Am1n3e/lm-evaluation-harness-ae/blob/5c4e0aa7e4802191c60ec20862ad59d16e702457/tests/models/test_huggingface.py#L25C1-L26C71. Changing the order of these two lines in the test should make it safe to do this copy where you've introduced it!
@haileyschoelkopf I've addressed the comments.
Thank you again!
…erAI#1458)
* Fixed generation args issue affecting openai completion model
* Fixed hf unit test; removed pop attributes in OpenAI completion.
* fix format
* fix format
---------
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
The task's requests use the generation kwargs as request args when the task type is `generate_until`, as shown in lm-evaluation-harness/lm_eval/api/task.py, line 1067 at a72babb.
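A hedged, self-contained paraphrase of that pattern (names are illustrative, not the harness's exact code): every `generate_until` request is built with a reference to one shared dict, not a copy.

```python
# Illustrative sharing pattern: both request tuples carry a reference
# to the very same generation_kwargs dict object.
generation_kwargs = {"until": ["\n\n"], "max_gen_toks": 128}

requests = [("context one", generation_kwargs),
            ("context two", generation_kwargs)]

assert requests[0][1] is requests[1][1]  # same object, so mutation leaks
```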
Then, in the OpenAI Completion model, the `until` attribute is popped from the request args: lm-evaluation-harness/lm_eval/models/openai_completions.py, line 270 at a72babb.
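A hedged paraphrase of that pop (the default value is taken from the issue text below; the surrounding code is omitted):

```python
# The pop described above mutates the shared request args in place.
request_args = {"until": ["\n\n"], "temperature": 0}
until = request_args.pop("until", ["<|endoftext|>"])
assert "until" not in request_args  # the shared dict has lost the key
```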
Since the object (`self.config.generation_kwargs`) is mutable, once the attribute is popped it will not be found on the next request, and it will default in this case to `["<|endoftext|>"]` instead of whatever value was in `until` (provided by the task config file or the `--gen_kwargs` argument). This means that only the first request will have the correct `until` value; any subsequent request will use the default.
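A minimal standalone sketch of the failure mode (not the harness's actual code): every request receives the same kwargs object, so `pop()` on the first request removes the key for all later ones.

```python
# Illustrative repro: a shared, mutable kwargs dict is handed to every
# request, so popping a key once loses it for all subsequent requests.
generation_kwargs = {"until": ["###"], "temperature": 0}

def handle_request(request_args):
    # pop() mutates the caller's dict in place
    return request_args.pop("until", ["<|endoftext|>"])

print(handle_request(generation_kwargs))  # ['###'] -- first request sees the configured value
print(handle_request(generation_kwargs))  # ['<|endoftext|>'] -- key is gone, default used
```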