add self.head_dim for VisionAttention in Qwen2-VL (huggingface#33211)
* add self.head_dim for VisionAttention in Qwen2-VL

* fix ci

* format test_modeling_qwen2_vl.py with black

* use ruff to format test_modeling_qwen2_vl.py

* [run-slow] qwen2_vl

* use typing for python3.8 (see the sketch after the diff below)

* fix the import format

* use ruff to fix the ci error I001

* [run-slow] qwen2_vl

* remove unused import

* commit for rebase

* use ruff to fix ci

* [run-slow] qwen2_vl

---------

Co-authored-by: root <liji>
GeLee-Q authored and BernardZach committed Dec 5, 2024
1 parent 1e10c00 commit 4c5fa94
Showing 1 changed file with 3 additions and 1 deletion:
tests/models/qwen2_vl/test_modeling_qwen2_vl.py
@@ -164,7 +164,9 @@ def prepare_config_and_inputs_for_common(self):
         attention_mask = torch.ones(input_ids.shape, dtype=torch.long, device=torch_device)
         input_ids[:, torch.arange(vision_seqlen, device=torch_device) + 1] = self.image_token_id
         labels = torch.zeros(
-            (self.batch_size, self.seq_length - 1 + vision_seqlen), dtype=torch.long, device=torch_device
+            (self.batch_size, self.seq_length - 1 + vision_seqlen),
+            dtype=torch.long,
+            device=torch_device,
         )
         patch_size = self.vision_config["patch_size"]
         inputs_dict = {
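
Note: the diff above is only the test-file reformat bundled into this PR; the change the commit title refers to lives in the Qwen2-VL modeling code. Below is a minimal sketch of that change, assuming the VisionAttention constructor shape used in transformers — the surrounding names are illustrative, not the verbatim diff:

    import torch.nn as nn

    class VisionAttention(nn.Module):
        def __init__(self, dim: int, num_heads: int = 16) -> None:
            super().__init__()
            self.num_heads = num_heads
            # The fix named in the title: store head_dim as an attribute so
            # later code can read it, rather than recomputing
            # dim // num_heads as a local variable each time.
            self.head_dim = dim // num_heads
            self.qkv = nn.Linear(dim, dim * 3, bias=True)
            self.proj = nn.Linear(dim, dim)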
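
On the "use typing for python3.8" step: subscripted builtin generics (PEP 585) such as list[int] only parse on Python 3.9+, so code that must import on 3.8 falls back to the typing module. A hedged illustration — the function below is hypothetical, not taken from the PR:

    from typing import List, Tuple

    def split_first_last(values: List[int]) -> Tuple[int, int]:
        # On Python 3.9+ this could use list[int] / tuple[int, int] directly.
        return values[0], values[-1]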
