
On the edge llama? #1052

Closed
NoNamedCat opened this issue Apr 19, 2023 · 1 comment
Labels
hardware (Hardware related), question (Further information is requested)

Comments

@NoNamedCat

Sorry to ask this... But is it possible to get llama.cpp working on something like an Edge TPU?

https://coral.ai/products/accelerator-module/


ghost commented Apr 19, 2023

That device doesn't have the memory an LLM needs (only 8 MB on-chip). You would have to transfer the weights in and out for processing, and that transfer would become the bottleneck.
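To see why the transfer dominates, here is a back-of-envelope sketch. The model size, quantization, and host-to-device bandwidth below are assumptions for illustration (a 7B model at 4-bit quantization over a USB 3.0-class link), not figures from the thread:

```python
# Back-of-envelope: streaming LLM weights through an 8 MB accelerator.
# Assumed (illustrative) numbers: 7B parameters, 4-bit quantization,
# ~400 MB/s effective host<->device bandwidth.
params = 7e9
bits_per_weight = 4
weight_bytes = params * bits_per_weight / 8      # ~3.5 GB of weights
on_chip_bytes = 8 * 1024 * 1024                  # 8 MB on-chip SRAM
chunks_per_token = weight_bytes / on_chip_bytes  # ~417 transfers per token
bandwidth = 400e6                                # bytes/s, assumed
seconds_per_token = weight_bytes / bandwidth     # ~8.75 s per token

print(f"{weight_bytes / 1e9:.1f} GB of weights, "
      f"{chunks_per_token:.0f} chunks per token")
print(f"~{seconds_per_token:.2f} s/token spent just moving data")
```

Every generated token needs one full pass over the weights, so with only 8 MB resident at a time the link would be saturated re-loading weights hundreds of times per token, which is why the memory, not the compute, rules this out.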

@sw added the question (Further information is requested) and hardware (Hardware related) labels Apr 23, 2023
@sw closed this as not planned Apr 23, 2023