Fix multi-gpu build
masahi committed Jan 31, 2024
1 parent 34c7137 commit b98e1e7
Showing 1 changed file with 4 additions and 1 deletion.
5 changes: 4 additions & 1 deletion mlc_llm/relax_model/llama_batched_vllm.py
@@ -440,7 +440,10 @@ def forward(
             seqlen_q,
         )

-        attn_output = nn.emit(reshape(attn_output, hidden_states.struct_info.shape))
+        attn_output_shape = tuple(hidden_states.struct_info.shape)[:-1] + (
+            self.num_query_heads * self.head_dim,
+        )
+        attn_output = nn.emit(reshape(attn_output, attn_output_shape))
         attn_output = self.o_proj(attn_output)

         return attn_output, (k_cache, v_cache)
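Why this fixes the multi-GPU build: under tensor parallelism the attention heads are sharded across GPUs, so each shard's attention output has a last dimension of `num_query_heads * head_dim` (the per-shard head count), while `hidden_states` keeps the full `hidden_size`. Reshaping to `hidden_states.struct_info.shape` therefore fails on multi-GPU runs. The sketch below illustrates the shape arithmetic with made-up numbers; the head counts, dimensions, and shard count are illustrative assumptions, not values from the repository.

```python
# Illustrative shape mismatch under 2-way tensor parallelism.
# All concrete numbers here are assumptions for demonstration.
num_shards = 2
num_query_heads = 32 // num_shards   # heads are divided across GPUs -> 16 per shard
head_dim = 128
hidden_size = 4096                   # hidden_states keeps the full model width

hidden_states_shape = (1, 16, hidden_size)  # (batch, seq_len, hidden_size)

# Old behavior: reshape target taken directly from hidden_states,
# which is wrong per shard when heads are sharded.
old_target = hidden_states_shape                                   # (1, 16, 4096)

# Fixed behavior: keep the leading dims, use the sharded head width.
new_target = hidden_states_shape[:-1] + (num_query_heads * head_dim,)

print(old_target)  # (1, 16, 4096)
print(new_target)  # (1, 16, 2048) -- matches the per-GPU attention output
```

On a single GPU the two targets coincide (`num_query_heads * head_dim == hidden_size`), which is why the original reshape only broke the multi-GPU build.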
