We propose our bce-reranker-base_v1 for reranking long passages (each passage < 32k tokens) via our Python package BCEmbedding. You can install it simply with `pip install BCEmbedding`.
The usage for reranking long passages is documented at https://github.com/netease-youdao/BCEmbedding?tab=readme-ov-file#1-based-on-bcembedding. Note that bce-reranker-base_v1 itself only supports a max length of 512 tokens; the method we use for reranking longer passages is open-source (see the "NOTE" at the URL above) and strikes a good balance between efficiency and effectiveness (it has also been adopted by other projects).
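For reference, here is a minimal sketch of the package usage, following the README linked above; the model path assumes the Hugging Face repo id `maidalun1020/bce-reranker-base_v1`, and the example query/passages are placeholders:

```python
# A minimal sketch following the BCEmbedding README linked above;
# the model path is assumed to be the Hugging Face repo id for this model.
from BCEmbedding import RerankerModel

query = "What does BCEmbedding provide?"
passages = [
    "BCEmbedding provides bilingual embedding and reranking models.",
    "An unrelated passage about something else entirely.",
]

# Init the reranker model (downloads weights on first use).
model = RerankerModel(model_name_or_path="maidalun1020/bce-reranker-base_v1")

# rerank() handles passages longer than 512 tokens internally via the
# open-source long-passage method mentioned in the README's NOTE.
rerank_results = model.rerank(query, passages)
print(rerank_results["rerank_passages"])
print(rerank_results["rerank_scores"])
```

Because `rerank()` handles the splitting internally, callers do not need to chunk long passages themselves.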
Hi,
I have deployed the bce-reranker-base_v1 model using huggingface/text-embeddings-inference.
In the info endpoint and in your provided example, I can see you are setting max_length=512.
Could you confirm whether this model supports only 512 tokens, or is there a way to process long text without truncation?
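For example, would a client-side workaround like the following sketch be a reasonable approach? (The `/rerank` request and response shapes are what TEI exposes as I understand it; the `localhost:8080` URL, the character-based chunking, and the max-score aggregation are my own assumptions for illustration.)

```python
# Hypothetical client-side workaround: split a long passage into chunks
# that fit under the 512-token limit, score each chunk against the query
# via the TEI /rerank endpoint, and keep the best chunk score.
import requests

TEI_URL = "http://localhost:8080/rerank"  # assumed local TEI deployment

def chunk_text(text: str, chunk_chars: int = 1200, overlap: int = 200) -> list[str]:
    """Crude character-based splitter; a tokenizer-aware splitter would
    respect the 512-token limit more faithfully."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_chars])
        start += chunk_chars - overlap
    return chunks

def score_long_passage(query: str, passage: str) -> float:
    chunks = chunk_text(passage) or [passage]
    resp = requests.post(TEI_URL, json={"query": query, "texts": chunks})
    resp.raise_for_status()
    # TEI's /rerank returns a list of {"index": ..., "score": ...} objects;
    # take the max chunk score as the passage score (assumed aggregation).
    return max(item["score"] for item in resp.json())
```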