support MiniCPM-V-2.5 #7599
Conversation
sync master
Does this patch fix it?

```diff
diff --git a/examples/llava/clip.h b/examples/llava/clip.h
index f028f187..2ff4d399 100644
--- a/examples/llava/clip.h
+++ b/examples/llava/clip.h
@@ -18,8 +18,6 @@
 # define CLIP_API
 #endif
 
-struct clip_ctx;
-
 #ifdef __cplusplus
 extern "C" {
 #endif
```
This does work and only leaves one now.
@ggerganov I'm glad that the CI is all green. Can we merge this PR now?
Congrats, awesome to see this progress so much. Thanks for the effort, looking forward to seeing 2.6.
@tc-mb, This is awesome!!! Hopefully 2.6 is on the way!
Hi, @ggerganov, I have submitted the PR for MiniCPM-V 2.6. This PR only updates the model, and the CI is all green too. Could you please take a look at it if you are free in the near future? |
@cmp-nct I've been following your contributions on vision models for a while. I'm very interested to hear your opinion on the MiniCPM-V-2.6 and MiniCPM-V-2.5 versions.
Did anyone actually try to convert the model with the provided scripts as described in README-minicpmv2.5.md? It looks like there is a problem: #9098 |
Hi, have you tried it on a PC? I think the problem is not with the code logic, but may be caused by cross-compilation. |
* init
* rename
* add run android for termux in readme
* add android readme
* add instructions in readme
* change name in readme
* Update README.md
* fixed line
* add result in readme
* random pos_embed
* add positions index
* change for ollama
* change for ollama
* better pos_embed in clip
* support ollama
* updata cmakelist
* updata cmakelist
* rename wrapper
* clear code
* replace and organize code
* add link
* sync master
* fix warnings
* fix warnings
* fix bug in bicubic resize when need resize iamge smaller
* receive review comments and modify
* receive review comments and modify
* put all code into llava dir
* fix quality problem in pr code
* change n_layer
* add space in "-1"
* imitate reshape bug of python code
* fix bug in clip
* fix issues for merging
* fix llama-minicpmv-cli in cmake file
* change pr readme
* fix code review
* remove in line 33 directory in the /cmakelists.txt (not in example, in the main dir)
* fix cmakefile
* add warn
* fix KEY_HAS_MINICPMV_PROJ
* remove load_image_size into clip_ctx
* remove the extern "C", MINICPMV_API
* fix uhd code for review comment
* delete minicpmv-wrapper in pr
* remove uhd_image_embed
* Modify 2 notes
* clip : style changes
* del common.h in clip
* fix Type-Check error
* fix Type-Check error
* fix Type-Check error
* fix Type-Check error
* fix makefile error
* fix ubuntu-make error
* try fix clip
* try fix 1

---------

Co-authored-by: Hongji Zhu <fireyoucan@gmail.com>
Co-authored-by: harvestingmoon <leewenyeong@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Dear llama.cpp Official,
Hi, I'm writing about our new PR submission for integrating our model MiniCPM-Llama3-V 2.5 into llama.cpp. The model has been trending on Hugging Face for over a week and has garnered significant user demand. During the previous MiniCPM-V PR attempt, we identified several critical implementation bugs. The official MiniCPM-V team has since fixed all of these issues, resulting in performance that matches our PyTorch version. These changes also distinguish our implementation significantly from the LLaVA example codebase.
Here are some key differences and improvements we've made:
While some aspects of our implementation may appear similar to the LLaVA example codebase, these distinct features and optimizations set our model apart. We could reference LLaVA for the overlapping components to maintain code integrity, but this might compromise the standalone nature of the different examples, akin to how Hugging Face Transformers ensures each model has its own implementation.
Given the extensive user interest and the robust performance of our implementation, merging this model would significantly benefit the community. We are open to collaborating on any adjustments you deem necessary and are committed to ensuring the highest code quality and usability.
Thank you for considering our request. We look forward to your feedback and hope for a positive resolution.
Best regards,
MiniCPM-V Official ^_^