[Inference] Update DygraphInferencePredictor #9491
Conversation
Thanks for your contribution!
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff             @@
##           develop    #9491      +/-   ##
===========================================
- Coverage    53.19%   52.84%   -0.36%
===========================================
  Files          700      699       -1
  Lines       110757   110376     -381
===========================================
- Hits         58921    58326     -595
- Misses       51836    52050    +214
```

☔ View full report in Codecov by Sentry.
```diff
@@ -719,10 +719,9 @@ def _infer(self, inputs: dict[str, paddle.Tensor]):
             inputs[key] = paddle.to_tensor(inputs[key])

         inputs["cache_kvs"] = self.cache_kvs
-        self.model.generate(
+        return self.model.generate(
             **inputs,
         )
```
Review comment: fixes the issue where `_infer` returned None.
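A minimal sketch of the bug class fixed here (the `ToyModel`/`ToyPredictor` names are hypothetical stand-ins, not PaddleNLP classes): a method that calls `generate` without `return` silently discards the result, so the caller sees `None`.

```python
# Illustrative sketch only; ToyModel and ToyPredictor are assumptions,
# not classes from the PaddleNLP codebase.
class ToyModel:
    def generate(self, **inputs):
        return ["generated text"]


class ToyPredictor:
    def __init__(self):
        self.model = ToyModel()

    def infer_broken(self, inputs):
        # Missing `return`: generate()'s result is discarded and the
        # method implicitly returns None, as in the pre-fix code.
        self.model.generate(**inputs)

    def infer_fixed(self, inputs):
        # The fix: propagate the model output to the caller.
        return self.model.generate(**inputs)


predictor = ToyPredictor()
assert predictor.infer_broken({}) is None
assert predictor.infer_fixed({}) == ["generated text"]
```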
llm/predict/predictor.py (outdated diff)
```diff
@@ -904,6 +903,8 @@ def _preprocess(self, input_text: list[str]):
         input_text = [input_text] if isinstance(input_text, str) else input_text
         input_text = [self.tokenizer.apply_chat_template(sentence, tokenize=False) for sentence in input_text]

+        input_text_batch_size = len(input_text)
```
Review comment: fixes handling when the input does not fill a complete batch_size.
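A sketch of the padding pattern this change supports (the `pad_to_batch` helper and its names are illustrative assumptions, not code from this PR): record the real input count, pad the partial batch up to the fixed `batch_size`, then trim the generated outputs back to the real count afterwards.

```python
# Illustrative sketch only; pad_to_batch is an assumed helper name,
# not part of this PR.
def pad_to_batch(input_text: list[str], batch_size: int) -> tuple[list[str], int]:
    """Pad a partial batch with empty prompts so buffers sized for
    `batch_size` (e.g. pre-allocated cache_kvs) still line up.
    Returns the padded list and the original count so the caller
    can slice the outputs back down after generation."""
    input_text_batch_size = len(input_text)
    if input_text_batch_size < batch_size:
        input_text = input_text + [""] * (batch_size - input_text_batch_size)
    return input_text, input_text_batch_size


padded, real_count = pad_to_batch(["hello"], batch_size=4)
assert len(padded) == 4 and real_count == 1
# After generation: outputs = outputs[:real_count]
```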
Force-pushed from ca61bec to 7b83fb1 (Compare).
PR types
Bug fixes

PR changes
Others

Description
Fix DygraphInferencePredictor: return the result of `model.generate()` instead of None, and handle inputs where `len(input_texts) < batch_size`.