1 parent e77dc4b commit ef1dd68
docs/deployment/k8s.md
@@ -9,6 +9,7 @@ Deploying vLLM on Kubernetes is a scalable and efficient way to serve machine le
 * [Deployment with GPUs](#deployment-with-gpus)
 
 Alternatively, you can deploy vLLM to Kubernetes using any of the following:
+
 * [Helm](frameworks/helm.md)
 * [InftyAI/llmaz](integrations/llmaz.md)
 * [KServe](integrations/kserve.md)
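For context on the direct deployment path that the doc above links out from, here is a minimal sketch of running vLLM on Kubernetes without any of the listed frameworks. The model name, image tag, GPU count, and resource names below are illustrative assumptions, not taken from this commit or the vLLM docs.

```yaml
# Minimal sketch: one vLLM replica exposing an OpenAI-compatible API on port 8000.
# The model (facebook/opt-125m), image tag, and GPU request are assumptions for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vllm-server
  template:
    metadata:
      labels:
        app: vllm-server
    spec:
      containers:
        - name: vllm
          image: vllm/vllm-openai:latest
          args: ["--model", "facebook/opt-125m"]
          ports:
            - containerPort: 8000
          resources:
            limits:
              nvidia.com/gpu: 1
---
# Cluster-internal Service in front of the deployment.
apiVersion: v1
kind: Service
metadata:
  name: vllm-server
spec:
  selector:
    app: vllm-server
  ports:
    - port: 8000
      targetPort: 8000
```

The frameworks listed in the diff (Helm, llmaz, KServe) wrap this same pattern with their own packaging and lifecycle management.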