
Commit 731e3f8

fix yapf

Signed-off-by: wangli <wangli858794774@gmail.com>

1 parent 3332104 commit 731e3f8

File tree: 1 file changed, +4 -2 lines

vllm_ascend/worker/model_runner.py (4 additions, 2 deletions)

```diff
@@ -696,12 +696,14 @@ def _compute_lens(self, inter_data: InterDataForSeqGroup, seq_idx: int,
 
         # Compute tokens.
         # Fixme: this is for the version compatibility, remove this once vllm v0.8.5 does not be supported.
-        if not hasattr(seq_data, "prompt_embeds") or seq_data.prompt_embeds is None:
+        if not hasattr(seq_data,
+                       "prompt_embeds") or seq_data.prompt_embeds is None:
             tokens = seq_data.get_token_ids()[context_len:seq_len]
             prompt_embeds = None
         else:
             tokens = [0] * (seq_len - context_len)
-            prompt_embeds = seq_data.get_token_embeddings()[context_len:seq_len]
+            prompt_embeds = seq_data.get_token_embeddings(
+            )[context_len:seq_len]
 
         token_types = seq_group_metadata.token_type_ids
 
```
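The branch this commit re-wraps guards against older vLLM versions whose sequence data lacks a `prompt_embeds` attribute. The sketch below isolates that compatibility pattern so it can be run standalone; `FakeSeqData` and `compute_tokens` are hypothetical stand-ins for vLLM's sequence-data object and the relevant slice of `_compute_lens`, not the real classes.

```python
# Minimal sketch of the version-compatibility branch touched by this commit.
# FakeSeqData is a hypothetical stand-in: on older vLLM the object may lack
# a `prompt_embeds` attribute entirely, hence the hasattr() guard.

class FakeSeqData:
    def __init__(self, token_ids, prompt_embeds=None):
        self._token_ids = token_ids
        if prompt_embeds is not None:
            # Only set the attribute when embeddings exist, to mimic
            # old versions where the attribute is absent altogether.
            self.prompt_embeds = prompt_embeds

    def get_token_ids(self):
        return self._token_ids

    def get_token_embeddings(self):
        return self.prompt_embeds


def compute_tokens(seq_data, context_len, seq_len):
    # Mirrors the guarded branch (with the yapf-wrapped condition).
    if not hasattr(seq_data,
                   "prompt_embeds") or seq_data.prompt_embeds is None:
        # Token-id path: slice real token ids, no embeddings.
        tokens = seq_data.get_token_ids()[context_len:seq_len]
        prompt_embeds = None
    else:
        # Embeddings path: placeholder token ids, slice the embeddings.
        tokens = [0] * (seq_len - context_len)
        prompt_embeds = seq_data.get_token_embeddings(
        )[context_len:seq_len]
    return tokens, prompt_embeds


old_style = FakeSeqData([1, 2, 3, 4])  # no prompt_embeds attribute
new_style = FakeSeqData([1, 2, 3, 4], ["e1", "e2", "e3", "e4"])

print(compute_tokens(old_style, 1, 3))  # token-id path
print(compute_tokens(new_style, 1, 3))  # embeddings path
```

The wrapping itself changes nothing semantically; both hunks only reflow lines that exceeded the formatter's column limit, which is consistent with the "fix yapf" commit message.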