Labels: bug (Something isn't working), good first issue (Good for newcomers)
Description
🐛 Describe the bug
https://github.com/vllm-project/vllm/blob/69f064062ba78a0ac44962f55a46a9d79cfb9ce0/vllm/model_executor/models/interfaces_base.py#L113 now requires every model class to define `get_input_embeddings`, so out-of-tree plugins will break until they are updated.
The fix is to add the method to the wrapper here:
tpu-inference/tpu_inference/models/common/model_loader.py, lines 413 to 416 in c8c1f09:

```python
# We need a custom __init__ that only calls torch.nn.Module's init,
# to avoid triggering JAX logic when vLLM inspects the class.
def wrapper_init(self, *args, **kwargs):
    torch.nn.Module.__init__(self)
```
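A minimal sketch of the proposed fix, assuming the wrapper-class pattern above. To keep it runnable without torch or JAX installed, a dummy `FakeModule` stands in for `torch.nn.Module`, and `make_compat_wrapper` / `CompatWrapper` are hypothetical names, not the actual tpu-inference API:

```python
class FakeModule:
    """Stand-in for torch.nn.Module so this sketch runs without torch."""
    def __init__(self):
        self._initialized = True


def make_compat_wrapper(jax_model_cls):
    """Wrap a (hypothetical) JAX-backed model class for vLLM inspection.

    vLLM's interfaces_base check now requires the model class to define
    get_input_embeddings, so a shim is added alongside the minimal __init__.
    """
    class CompatWrapper(FakeModule):
        # Only call the base Module's __init__, to avoid triggering
        # JAX logic when vLLM inspects the class.
        def __init__(self, *args, **kwargs):
            FakeModule.__init__(self)
            self._jax_model_cls = jax_model_cls

        # Shim so the get_input_embeddings requirement is satisfied;
        # the real embedding lookup lives in the wrapped JAX model.
        def get_input_embeddings(self, input_ids):
            raise NotImplementedError(
                "embeddings are computed by the wrapped JAX model")

    return CompatWrapper


Wrapped = make_compat_wrapper(object)
assert hasattr(Wrapped, "get_input_embeddings")
```

The shim only needs to exist for the class-level interface check to pass; whether it should raise or delegate to the JAX model is a design choice left to the plugin.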
Before submitting a new issue...
- Make sure you already searched for relevant issues and checked the documentation page, which can answer lots of frequently asked questions.