cortex.cpp's desktop focus means Drogon's features are unused
We should contribute our vision and multimodal work upstream to the llama.cpp server.
Can we consider refactoring llamacpp-engine to use the server implementation, and maintaining a fork with our improvements to speech, vision, etc.? This is especially relevant if we do a C++ implementation of whisperVQ in the future.
I agree that we should align with the llama.cpp upstream, but I have several concerns:
1. Drogon: Drogon is part of cortex.cpp; we have already removed it from llamacpp-engine. If we also remove Drogon from cortex.cpp, we need to find a replacement, which will be costly.
2. Repository structure: forking the server implementation will require changes to our repository structure, since we currently consume llama.cpp as a submodule.
3. Divergence: our current version differs significantly from upstream, so refactoring will take considerable time.
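On the repository-structure point, one possible layout (a sketch only — the fork URL `janhq/llama.cpp` and the submodule path are hypothetical, not an agreed plan) is to repoint the existing submodule at our fork and keep upstream as a second remote so our speech/vision patches can be rebased onto new llama.cpp server releases:

```shell
# Sketch: repoint the llama.cpp submodule at a fork.
# Run inside the cortex.cpp checkout; URLs/paths are hypothetical.
git submodule set-url llama.cpp https://github.com/janhq/llama.cpp.git
git submodule update --init llama.cpp

# Inside the submodule, keep upstream reachable for periodic rebases
# of our patches onto new upstream server releases:
cd llama.cpp
git remote add upstream https://github.com/ggerganov/llama.cpp.git
git fetch upstream
```

This keeps the submodule workflow we already have, at the cost of maintaining the fork's rebase cadence ourselves.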