MTEB: Massive Text Embedding Benchmark #627
**Related issues**

**#455: Embeddings - OpenAI API** (similarity score: 0.86)

- [ ] [Embeddings - OpenAI API](https://platform.openai.com/docs/guides/embeddings)

  Embeddings: learn how to turn text into numbers, unlocking use cases like search. New embedding models: our newest and most performant embedding models, ...

  Suggested labels: null

**#552: LargeWorldModel/LWM-Text-Chat-1M · Hugging Face** (similarity score: 0.86)

- [ ] [LargeWorldModel/LWM-Text-Chat-1M · Hugging Face](https://huggingface.co/LargeWorldModel/LWM-Text-Chat-1M)

  LWM-Text-1M-Chat model card. Model type: LWM-Text-1M-Chat is an open-source model trained from LLaMA-2 on a subset of Books3 filtered data. It is an auto-regressive language model based on the transformer architecture. Model date: LWM-Text-1M-Chat was trained in December 2023. Paper or resources for more information: https://largeworldmodel.github.io/

  Suggested labels: {'label-name': 'Open-source Models', 'label-description': 'Models that are publicly available and open-source for usage and exploration.', 'gh-repo': 'huggingfaceco/LargeWorldModel/LWM-Text-Chat-1M', 'confidence': 56.11}

**#625: unsloth/README.md at main · unslothai/unsloth** (similarity score: 0.86)

- [ ] [unsloth/README.md at main · unslothai/unsloth](https://github.com/unslothai/unsloth/blob/main/README.md?plain=1)

  ✨ Finetune for free: all notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM, or uploaded to Hugging Face. The README also covers: 🦥 Unsloth.ai News, 🔗 Links and Resources, ⭐ Key Features, and 🥇 Performance Benchmarking.

  Suggested labels

**#386: SciPhi/AgentSearch-V1 · Datasets at Hugging Face** (similarity score: 0.85)

- [ ] [SciPhi/AgentSearch-V1 · Datasets at Hugging Face](https://huggingface.co/datasets/SciPhi/AgentSearch-V1)

  Getting started: the AgentSearch-V1 dataset is a comprehensive collection of over one billion embeddings, produced using jina-v2-base. It includes more than 50 million high-quality documents and over 1 billion passages, covering a vast range of content from sources such as Arxiv, Wikipedia, Project Gutenberg, and carefully filtered Creative Commons (CC) data. Our team is dedicated to continuously expanding and enhancing this corpus to improve the search experience. We welcome your thoughts and suggestions – please feel free to reach out with your ideas!

  To access and utilize the AgentSearch-V1 dataset, you can stream it via Hugging Face with the following Python code:

  ```python
  from datasets import load_dataset
  import json
  import numpy as np

  # To stream the entire dataset:
  ds = load_dataset("SciPhi/AgentSearch-V1", data_files="**/*", split="train", streaming=True)

  # Optional: stream just the "arxiv" subset
  # ds = load_dataset("SciPhi/AgentSearch-V1", data_files="arxiv/*", split="train", streaming=True)

  # To process the entries:
  for entry in ds:
      embeddings = np.frombuffer(entry["embeddings"], dtype=np.float32).reshape(-1, 768)
      text_chunks = json.loads(entry["text_chunks"])
      metadata = json.loads(entry["metadata"])
      print(f"Embeddings:\n{embeddings}\n\nChunks:\n{text_chunks}\n\nMetadata:\n{metadata}")
      break
  ```

  A full set of scripts to recreate the dataset from scratch can be found here. Further, you may check the docs for details on how to perform RAG over AgentSearch.

  Languages: English.

  Dataset structure: the raw dataset structure is as follows:

  ```json
  {
    "url": "...",
    "title": "...",
    "metadata": {"url": "...", "timestamp": "...", "source": "...", "language": "..."},
    "text_chunks": "...",
    "embeddings": "...",
    "dataset": "book" | "arxiv" | "wikipedia" | "stack-exchange" | "open-math" | "RedPajama-Data-V2"
  }
  ```

  Dataset creation: this dataset was created as a step towards making humanity's most important knowledge openly searchable and LLM optimal. It was created by filtering, cleaning, and augmenting locally publicly available datasets. To cite our work, please use @software{SciPhi2023AgentSearch, ...}. Source data: @online{wikidump, ...}, @misc{paster2023openwebmath, ...}, @software{together2023redpajama, ...}.

  License: please refer to the licenses of the data subsets you use.

  Suggested labels: { "key": "knowledge-dataset", "value": "A dataset with one billion embeddings from various sources, such as Arxiv, Wikipedia, Project Gutenberg, and carefully filtered Creative Commons data" }

**#626: classifiers/README.md at main · blockentropy/classifiers** (similarity score: 0.85)

- [ ] [classifiers/README.md at main · blockentropy/classifiers](https://github.com/blockentropy/classifiers/blob/main/README.md?plain=1)

  Fast classifiers for prompt routing: routing and controlling the information flow is a core component in optimizing machine learning tasks. While some architectures focus on internal routing of data within a model, we focus on the external routing of data between models. This enables the combination of open source, proprietary, API based, and software based approaches to work together behind a smart router. We investigate three different ways of externally routing the prompt: cosine similarity via embeddings, zero-shot classification, and small classifiers.

  Implementation of fast classifiers: we quantize the model using Optimum, enabling it to run extremely fast on a CPU router. Each classifier takes 5-8 ms to run, and an ensemble of 8 prompt classifiers takes about 50 ms in total, so each endpoint can route about 20 requests per second. In the example, a train/test split of 80/20 yields an accuracy of 95.49% and an F1 score of 0.9227.

  Comparison vs. other routing methods: the most popular alternative to routing is via embedding similarity. For example, to route a programming question, one might set up the target classes ["coding", "not coding"]. Each of these strings is transformed into an embedding and compared against a prompt query like "write a bubble sort in python". Given the computed pair-wise cosine similarity between the query and each class, we can label the prompt as a coding question and route it to a coding-specific model (a minimal sketch of this approach appears after this list). These methods do not scale well to larger numbers of embeddings, nor can they capture non-semantic classes (like whether the response is likely to be more or less than 200 tokens). However, they are adaptable and comparably fast, and thus provide a good alternative to the trained fast classifiers.

  Quantifying the routing methods by execution time: as the prompt size increases, the query time also increases (a), and time grows close to linearly with the number of classes (b). However, the small classifiers do not slow down as the class examples grow in number of tokens (c), owing to the upfront cost of training the binary classifier, which reduces cost at inference.

  Suggested labels: {'label-name': 'Prompt-Routing', 'label-description': 'Focuses on external routing of data between models to optimize machine learning tasks.', 'confidence': 50.24}

**#494: Awesome-Efficient-LLM: A curated list for Efficient Large Language Models** (similarity score: 0.84)

- [ ] [horseee/Awesome-Efficient-LLM: A curated list for Efficient Large Language Models](https://github.com/horseee/Awesome-Efficient-LLM#inference-acceleration)

  A curated list for efficient large language models, covering Inference Acceleration and Updates. Contributing: if you'd like to include your paper or need to update any details, please feel free to submit a pull request. You can generate the required markdown format for each paper by filling in the information in ...

  Suggested labels: { "label-name": "efficient-llm-acceleration", "description": "Inference acceleration techniques for efficient large language models.", "repo": "horseee/Awesome-Efficient-LLM", "confidence": 70.8 }
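For the embedding-similarity routing described in the #626 entry, here is a minimal sketch, assuming the `sentence-transformers` package; the model name and target classes are illustrative, not the blockentropy/classifiers implementation:

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative encoder choice; the actual router may use a different model.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical target classes from the example above.
classes = ["coding", "not coding"]
class_embeddings = model.encode(classes, convert_to_tensor=True)

prompt = "write a bubble sort in python"
prompt_embedding = model.encode(prompt, convert_to_tensor=True)

# Pair-wise cosine similarity between the query and each class;
# route the prompt to the class with the highest score.
scores = util.cos_sim(prompt_embedding, class_embeddings)[0]
print(classes[int(scores.argmax())])  # expected: "coding"
```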
Title: blog/mteb.md at main · huggingface/blog
Description:
"---
title: "MTEB: Massive Text Embedding Benchmark"
thumbnail: /blog/assets/110_mteb/thumbnail.png
authors:
MTEB: Massive Text Embedding Benchmark
MTEB is a massive benchmark for measuring the performance of text embedding models on diverse embedding tasks.
The 🥇 leaderboard provides a holistic view of the best text embedding models out there on a variety of tasks.
The 📝 paper gives background on the tasks and datasets in MTEB and analyzes leaderboard results!
The 💻 Github repo contains the code for benchmarking and submitting any model of your choice to the leaderboard.
Why Text Embeddings?
Text Embeddings are vector representations of text that encode semantic information. As machines require numerical inputs to perform computations, text embeddings are a crucial component of many downstream NLP applications. For example, Google uses text embeddings to power their search engine. Text embeddings can also be used for finding patterns in large amounts of text via clustering, or as inputs to text classification models, such as in our recent SetFit work. The quality of text embeddings, however, is highly dependent on the embedding model used. MTEB is designed to help you find the best embedding model out there for a variety of tasks!
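As a concrete illustration, here is a minimal sketch using the `sentence-transformers` library; the model name is just one common lightweight choice:

```python
from sentence_transformers import SentenceTransformer, util

# all-MiniLM-L6-v2 maps each text to a 384-dimensional vector.
model = SentenceTransformer("all-MiniLM-L6-v2")

embeddings = model.encode([
    "How do I reset my password?",
    "I forgot my login credentials.",
    "The weather is nice today.",
])

# Semantically related sentences score higher under cosine similarity.
print(util.cos_sim(embeddings[0], embeddings[1]))  # high
print(util.cos_sim(embeddings[0], embeddings[2]))  # low
```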
MTEB
🐋 Massive: MTEB includes 56 datasets across 8 tasks and currently summarizes >2000 results on the leaderboard.
🌎 Multilingual: MTEB contains up to 112 different languages! We have benchmarked several multilingual models on Bitext Mining, Classification, and STS.
🦚 Extensible: Be it new tasks, datasets, metrics, or leaderboard additions, any contribution is very welcome. Check out the GitHub repository to submit to the leaderboard or solve open issues. We hope you join us on the journey of finding the best text embedding model!
Overview of tasks and datasets in MTEB. Multilingual datasets are marked with a purple shade.
Models
For the initial benchmarking of MTEB, we focused on models claiming state-of-the-art results and popular models on the Hub. This led to a high representation of transformers. 🤖
Models by average English MTEB score (y) vs speed (x) vs embedding size (circle size).
We grouped models into the following three categories to simplify finding the best model for your task:
🏎 Maximum speed Models like GloVe offer high speed, but suffer from a lack of context awareness, resulting in low average MTEB scores.
⚖️ Speed and performance Slightly slower, but significantly stronger, all-mpnet-base-v2 or all-MiniLM-L6-v2 provide a good balance between speed and performance.
💪 Maximum performance Multi-billion parameter models like ST5-XXL, GTR-XXL or SGPT-5.8B-msmarco dominate on MTEB. They also tend to produce bigger embeddings: SGPT-5.8B-msmarco, for instance, produces 4096-dimensional embeddings, which require more storage!
Model performance varies a lot depending on the task and dataset, so we recommend checking the various tabs of the leaderboard before deciding which model to use!
Benchmark your model
Using the MTEB library, you can benchmark any model that produces embeddings and add its results to the public leaderboard. Let's run through a quick example!
First, install the library:
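The library is published on PyPI as `mteb`:

```bash
pip install mteb
```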
Next, benchmark a model on a dataset, for example komninos word embeddings on Banking77.
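A minimal sketch following the MTEB README; the task name `Banking77Classification` and the `results/` output folder match the file path mentioned below:

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# The komninos average word embeddings model from the Hub.
model_name = "average_word_embeddings_komninos"
model = SentenceTransformer(model_name)

# Restrict the benchmark to a single task for a quick run.
evaluation = MTEB(tasks=["Banking77Classification"])
evaluation.run(model, output_folder=f"results/{model_name}")
```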
This should produce a `results/average_word_embeddings_komninos/Banking77Classification.json` file!

Now you can submit the results to the leaderboard by adding them to the metadata of the `README.md` of any model on the Hub. Run our automatic script to generate the metadata:
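A sketch, assuming the `mteb_meta.py` script from the MTEB GitHub repository:

```bash
# Point the script at the results folder produced in the previous step.
python mteb_meta.py results/average_word_embeddings_komninos
```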
The script will produce a `mteb_metadata.md` file that looks like this:

```yaml
---
tags:
- mteb
model-index:
- name: average_word_embeddings_komninos
  results:
  - task:
      type: Classification
    dataset:
      type: mteb/banking77
      name: MTEB Banking77Classification
      config: default
      split: test
      revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
    metrics:
    - type: accuracy
      value: 66.76623376623377
    - type: f1
      value: 66.59096432882667
---
```
Now add the metadata to the top of the `README.md` of any model on the Hub, like this SGPT-5.8B-msmarco model, and it will show up on the leaderboard after refreshing!

Next steps
Go out there and benchmark any model you like! Let us know if you have questions or feedback by opening an issue on our GitHub repo or the leaderboard community tab 🤗
Happy embedding!
"
URL: https://github.com/huggingface/blog/blob/main/mteb.md
Suggested labels