Is your feature request related to a problem? Please describe
For the past few months I have been hearing great things about the OpenHermes 2.5 model, especially in the r/LocalLlama community, where it is regarded as one of the best 7B models and is said to outperform much larger models. Every week there is a new post about its capabilities.
I downloaded the Q2_K model and was pleasantly surprised by its performance. Among all the 7B Q2_K models I have tried, this one definitely performs best. It also fits inside my 8GB RAM laptop, which is another plus. I'll make a PR to integrate it into our repo.
I find it does summaries quite well. I have asked it about a wide range of topics and it has been ~90% correct on the first response; it can somewhat fall apart after going back and forth a few times, but it's only 7B.
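To illustrate how the model could be run locally, here is a minimal sketch using llama-cpp-python, which supports GGUF Q2_K weights and the ChatML chat format. The file name, context size, and prompt below are illustrative assumptions, not part of any existing PR:

```python
# Minimal sketch (assumptions: local GGUF filename, n_ctx choice, sample prompt).
from llama_cpp import Llama

llm = Llama(
    model_path="openhermes-2.5-mistral-7b.Q2_K.gguf",  # hypothetical local path
    n_ctx=2048,           # modest context window to stay within ~8 GB of RAM
    chat_format="chatml",  # OpenHermes 2.5 expects the ChatML prompt format
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the main idea of transformers in two sentences."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```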
I simply tallied the number of thumbs-ups I gave for each first (👍👍👍), second (👍👍), and third place (👍) across all three test series, which gives us this final ranking:
1st. 👍👍👍👍👍👍👍 OpenHermes-2.5-Mistral-7B with official ChatML format
...
All three models are excellent; they are definitely the best 7B models so far.
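For context, the "official ChatML format" referenced in the ranking wraps each turn in `<|im_start|>` / `<|im_end|>` tokens; a typical prompt looks roughly like this:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
```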
Some sources include: