
Add example for llama.cpp #174

Merged

merged 2 commits into kubernetes-sigs:main from examples_llamacpp on Jul 23, 2024

Conversation

@justinsb (Contributor) commented on Jul 20, 2024

In the first commit we just bring up llama.cpp, not really using LWS.

In the second commit we really use LeaderWorkerSet, leveraging llama.cpp's RPC support.
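For reference, here is a minimal sketch (not the PR's actual manifest) of how the second commit's pieces can fit together: each worker pod runs the rpc-server binary that the GGML_RPC=ON build produces, and the leader pod runs llama-server, pointing its --rpc flag at the workers. The image name, binary paths, model path, port, and pod addresses below are illustrative assumptions.

apiVersion: leaderworkerset.x-k8s.io/v1
kind: LeaderWorkerSet
metadata:
  name: llamacpp
spec:
  replicas: 1
  leaderWorkerTemplate:
    size: 3                       # 1 leader + 2 RPC workers per group
    leaderTemplate:
      spec:
        containers:
        - name: leader
          image: llamacpp:latest  # placeholder for an image built from the Dockerfile below
          command:
          - /app/llama-server     # hypothetical install path
          - --model
          - /models/model.gguf
          - --host
          - "0.0.0.0"
          # Offload work to the workers over llama.cpp's RPC backend; LWS gives
          # each worker pod a stable DNS name of the form <lws>-<group>-<index>.<lws>.
          - --rpc
          - llamacpp-0-1.llamacpp:50052,llamacpp-0-2.llamacpp:50052
    workerTemplate:
      spec:
        containers:
        - name: worker
          image: llamacpp:latest  # same placeholder image
          # rpc-server is only built when GGML_RPC=ON (see the Dockerfile snippet below)
          command: ["/app/rpc-server", "-H", "0.0.0.0", "-p", "50052"]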

@k8s-ci-robot k8s-ci-robot requested review from ahg-g and liurupeng July 20, 2024 13:49
@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes (indicates the PR's author has signed the CNCF CLA) and size/XL (denotes a PR that changes 500-999 lines, ignoring generated files) labels on Jul 20, 2024
@justinsb justinsb force-pushed the examples_llamacpp branch from a3b66d4 to 333d338 on July 20, 2024 at 18:53
@liurupeng (Collaborator) commented

thanks @justinsb for the contribution! Could you add a README for this example, like the vLLM one? (https://github.com/kubernetes-sigs/lws/blob/main/docs/examples/vllm/README.md)

justinsb added 2 commits on July 23, 2024:

- Not (yet) using the leader functionality, just going direct to a worker.
- Previously we weren't actually running on multiple pods; now we are.
@justinsb justinsb force-pushed the examples_llamacpp branch from 333d338 to 3565efe on July 23, 2024 at 17:28
# GGML_RPC=ON: Builds RPC support
# BUILD_SHARED_LIBS=OFF: Don't rely on shared libraries like libggml
RUN cmake . -DGGML_RPC=ON -DBUILD_SHARED_LIBS=OFF -DGGML_DEBUG=1
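# --parallel 8: run up to 8 compile jobs at once (speeds up the Docker image build only; no runtime effect)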
RUN cmake --build . --config Release --parallel 8
Collaborator commented

is the parallel here "tensor parallelism" or "pipeline parallelism"?

Contributor (author) replied

This is just running the cmake build in parallel. It makes the Docker build slightly faster, with no runtime effect :-)

llama.cpp began as a project to support CPU-only inference on a single node, but has
since expanded to support accelerators and distributed inference.

l.md)
Collaborator commented

is this added accidentally?

Contributor (author) replied

Yes, I'll remove ... I'm thinking I should add support for GPUs also next, so maybe I'll do that at the same time!

@liurupeng (Collaborator) commented

thanks @justinsb! Very nice example of using CPU for multi-node inference!

@liurupeng (Collaborator) commented

/approve
/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm label ("Looks good to me"; indicates that a PR is ready to be merged) on Jul 23, 2024
@k8s-ci-robot (Contributor) commented

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: justinsb, liurupeng

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Jul 23, 2024
@k8s-ci-robot k8s-ci-robot merged commit c8ae17b into kubernetes-sigs:main on Jul 23, 2024
8 checks passed