fix docs #2882

Merged: 1 commit, Jan 8, 2025
2 changes: 1 addition & 1 deletion in docs/source/Instruction/推理和部署.md
@@ -15,7 +15,7 @@ SWIFT supports inference and deployment via the command line, Python code, and a web UI:
 - The `single-line` command switches to single-line mode
 - The `clear` command clears the history
 - The `exit` command exits
-- If the query contains multimodal data, add tags such as <image>/<video>/<audio>; for example, input `<image>What is in the image?`, and you can then enter the image address
+- If the query contains multimodal data, add tags such as `<image>/<video>/<audio>`; for example, input `<image>What is in the image?`, and you can then enter the image address
 
 ## Inference Acceleration Backend
 
2 changes: 1 addition & 1 deletion in docs/source_en/Instruction/Inference-and-deployment.md
@@ -16,7 +16,7 @@ The command line inference can be referred to via the link provided in the second
 - The `single-line` command switches to single-line mode.
 - The `clear` command clears the history.
 - The `exit` command exits the application.
-If the query involves multimodal data, add tags like <image>/<video>/<audio>. For example, input `<image>What is in the image?`, and you can then input the image address.
+If the query involves multimodal data, add tags like `<image>/<video>/<audio>`. For example, input `<image>What is in the image?`, and you can then input the image address.
 
 ## Inference Acceleration Backend
 You can perform inference and deployment using `swift infer/deploy`. Currently, SWIFT supports three inference frameworks: pt (native torch), vLLM, and LMDeploy. You can switch between them with `--infer_backend pt/vllm/lmdeploy`. Apart from pt, vLLM and LMDeploy each have their own range of supported models; please refer to their official documentation to verify availability and avoid runtime errors.
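
The line corrected by this PR documents SWIFT's interactive CLI. A minimal sketch of such a session follows; the model name is an illustrative assumption, and the exact prompt strings SWIFT prints may differ from the comments here.

```shell
# Start an interactive chat with a multimodal model (the model name is an
# assumption; any SWIFT-supported multimodal model works the same way).
swift infer --model Qwen/Qwen2-VL-7B-Instruct

# At the interactive prompt, prefix the query with a tag such as <image>:
#   <<< <image>What is in the image?
# SWIFT then asks for the image address; a local path or a URL can be entered:
#   <<< ./example.jpg
```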
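
The backend paragraph in the diff context also reads more concretely with an example. This is a sketch, not part of the PR: the `--infer_backend` values and the `swift infer/deploy` commands are quoted from the doc, while the model name is an assumption, and vLLM must be installed separately for the second command.

```shell
# Native torch (pt) backend, the default:
swift infer --model Qwen/Qwen2-7B-Instruct --infer_backend pt

# The same model deployed through vLLM for faster serving; assumes vllm is
# installed and the model falls within vLLM's support range:
swift deploy --model Qwen/Qwen2-7B-Instruct --infer_backend vllm
```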