From 637c099e34ff01b7f20d80e0ffba19a0e6c096d8 Mon Sep 17 00:00:00 2001 From: Quentin Lhoest Date: Thu, 13 Nov 2025 19:37:29 +0100 Subject: [PATCH 1/8] index --- docs/hub/index.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/docs/hub/index.md b/docs/hub/index.md index 8da2a01b3..befbdca3c 100644 --- a/docs/hub/index.md +++ b/docs/hub/index.md @@ -61,7 +61,10 @@ The Hugging Face Hub is a platform with over 2M models, 500k datasets, and 1M de Gated Datasets Uploading Datasets Downloading Datasets +Streaming Datasets +Editing Datasets Libraries +Connectors Dataset Viewer Download Stats Data files Configuration From 9a13dab432b965ae0778b0873c7f834eed0122e9 Mon Sep 17 00:00:00 2001 From: Quentin Lhoest Date: Thu, 13 Nov 2025 19:37:46 +0100 Subject: [PATCH 2/8] wip --- docs/hub/datasets-connectors.md | 2 ++ docs/hub/datasets-editing.md | 2 ++ 2 files changed, 4 insertions(+) create mode 100644 docs/hub/datasets-connectors.md create mode 100644 docs/hub/datasets-editing.md diff --git a/docs/hub/datasets-connectors.md b/docs/hub/datasets-connectors.md new file mode 100644 index 000000000..0135fb555 --- /dev/null +++ b/docs/hub/datasets-connectors.md @@ -0,0 +1,2 @@ +# Datasets connectors + diff --git a/docs/hub/datasets-editing.md b/docs/hub/datasets-editing.md new file mode 100644 index 000000000..90520aae3 --- /dev/null +++ b/docs/hub/datasets-editing.md @@ -0,0 +1,2 @@ +# Datasets editing + From a607ad28ca697fa7537f5497384b56b03d7d4567 Mon Sep 17 00:00:00 2001 From: Quentin Lhoest Date: Thu, 13 Nov 2025 19:37:53 +0100 Subject: [PATCH 3/8] update libraries page --- docs/hub/datasets-libraries.md | 52 ++++++++++++++++++++++++++-------- 1 file changed, 40 insertions(+), 12 deletions(-) diff --git a/docs/hub/datasets-libraries.md b/docs/hub/datasets-libraries.md index 8610aa05d..253da620b 100644 --- a/docs/hub/datasets-libraries.md +++ b/docs/hub/datasets-libraries.md @@ -4,24 +4,52 @@ The Datasets Hub has support for several libraries in the Open Source ecosystem. Thanks to the [huggingface_hub Python library](/docs/huggingface_hub), it's easy to enable sharing your datasets on the Hub. We're happy to welcome to the Hub a set of Open Source libraries that are pushing Machine Learning forward. +## Libraries table + The table below summarizes the supported libraries and their level of integration. | Library | Description | Download from Hub | Push to Hub | | ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ----------------- | ----------- | | [Argilla](./datasets-argilla) | Collaboration tool for AI engineers and domain experts that value high quality data. | ✅ | ✅ | -| [Daft](./datasets-daft) | Data engine for large scale, multimodal data processing with a Python-native interface. | ✅ | ✅ | -| [Dask](./datasets-dask) | Parallel and distributed computing library that scales the existing Python and PyData ecosystem. | ✅ | ✅ | -| [Datasets](./datasets-usage) | 🤗 Datasets is a library for accessing and sharing datasets for Audio, Computer Vision, and Natural Language Processing (NLP). | ✅ | ✅ | +| [Daft](./datasets-daft) | Data engine for large scale, multimodal data processing with a Python-native interface. | ✅ +s | ✅ +s +p | +| [Dask](./datasets-dask) | Parallel and distributed computing library that scales the existing Python and PyData ecosystem. 
| ✅ +s | ✅ +s +p* | +| [Datasets](./datasets-usage) | 🤗 Datasets is a library for accessing and sharing datasets for Audio, Computer Vision, and Natural Language Processing (NLP). | ✅ +s | ✅ +s +p | | [Distilabel](./datasets-distilabel) | The framework for synthetic data generation and AI feedback. | ✅ | ✅ | -| [DuckDB](./datasets-duckdb) | In-process SQL OLAP database management system. | ✅ | ✅ | -| [Embedding Atlas](./datasets-embedding-atlas) | Interactive visualization and exploration tool for large embeddings. | ✅ | ❌ | -| [fenic](./datasets-fenic) | PySpark-inspired DataFrame framework for building production AI and agentic applications. | ✅ | ❌ | -| [FiftyOne](./datasets-fiftyone) | FiftyOne is a library for curation and visualization of image, video, and 3D data. | ✅ | ✅ | -| [Pandas](./datasets-pandas) | Python data analysis toolkit. | ✅ | ✅ | -| [Polars](./datasets-polars) | A DataFrame library on top of an OLAP query engine. | ✅ | ✅ | -| [PyArrow](./datasets-pyarrow) | Apache Arrow is a columnar format and a toolbox for fast data interchange and in-memory analytics. | ✅ | ✅ | -| [Spark](./datasets-spark) | Real-time, large-scale data processing tool in a distributed environment. | ✅ | ✅ | -| [WebDataset](./datasets-webdataset) | Library to write I/O pipelines for large datasets. | ✅ | ❌ | +| [DuckDB](./datasets-duckdb) | In-process SQL OLAP database management system. | ✅ +s | ❌ | +| [Embedding Atlas](./datasets-embedding-atlas) | Interactive visualization and exploration tool for large embeddings. | ✅ +s | ❌ | +| [Fenic](./datasets-fenic) | PySpark-inspired DataFrame framework for building production AI and agentic applications. | ✅ +s | ❌ | +| [FiftyOne](./datasets-fiftyone) | FiftyOne is a library for curation and visualization of image, video, and 3D data. | ✅ +s | ✅ | +| [Pandas](./datasets-pandas) | Python data analysis toolkit. | ✅ | ✅ +p* | +| [Polars](./datasets-polars) | A DataFrame library on top of an OLAP query engine. | ✅ +s | ✅ | +| [PyArrow](./datasets-pyarrow) | Apache Arrow is a columnar format and a toolbox for fast data interchange and in-memory analytics. | ✅ +s | ✅ +p* | +| [Spark](./datasets-spark) | Real-time, large-scale data processing tool in a distributed environment. | ✅ +s | ✅ +s +p | +| [WebDataset](./datasets-webdataset) | Library to write I/O pipelines for large datasets. | ✅ +s | ❌ | + +_+s: Supports Streaming_ +_+p: Writes optimized Parquet files_ +_+p*: Requires passing extra arguments to write optimized Parquet files_ + +### Streaming + +Dataset streaming allows to iterate on a dataset on Hugging Face progressively without having to download it completely. +It saves disk space and download time. + +In addition to streaming from Hugging Face, many libraries also support streaming when writing back to Hugging Face. +Therefore they can run end-to-end streaming pipelines: streaming from a source and writing to Hugging Face progressively, often overlapping the downloads, uploads and processing steps. + +For more details on how to do streaming, check out the documentation of a library that support streaming (see table above) or the [streaming datasets](./datasets-streaming) documentation if you want to stream datasets from Hugging Face by yourself. 
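As an illustration, here is what a minimal end-to-end streaming pipeline can look like with 🤗 Datasets. This is only a sketch: the repository ids and the `text` column are placeholders, and pushing a streamed (iterable) dataset back to the Hub assumes a recent version of `datasets` that supports it.

```python
from datasets import load_dataset

# Placeholders: replace with repositories you can read from and write to
source_repo = "username/source_dataset"
target_repo = "username/processed_dataset"

# Stream the source dataset instead of downloading it entirely
ds = load_dataset(source_repo, split="train", streaming=True)

# Transformations are applied lazily, while the data is being streamed
ds = ds.map(lambda example: {"text": example["text"].strip()})

# Upload the processed dataset to the Hub progressively
ds.push_to_hub(target_repo)
```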
+ +### Optimized Parquet files + +Parquet files on Hugging Face are optimized to improve storage efficiency, accelerate downloads and uploads, and enable efficient dataset streaming and editing: + +* [Parquet Content Defined Chunking](https://huggingface.co/blog/parquet-cdc) optimizes Parquet for [Xet](https://huggingface.co/docs/hub/en/xet/index), Hugging Face's storage based on Git. It accelereates uploads and downloads thanks to deduplication and allows efficient file editing +* Page index accelerates filters when streaming and enables efficient random access, e.g. in the [Dataset Viewer](https://huggingface.co/docs/dataset-viewer) + +Some libraries require extra argument to write optimized Parquet files: + +* `content_defined_chunking=True` to enable Parquet Content Defined Chunking, for [deduplication](https://huggingface.co/blog/parquet-cdc) and [editing](./datasets-editing) +* `write_page_index=True` to include a page index in the Parquet metadata, for [streaming and random access](./datasets-streaming) ## Integrating data libraries and tools with the Hub From 97fe46607895a4963055d17eb346cf34c285a9e5 Mon Sep 17 00:00:00 2001 From: Quentin Lhoest Date: Thu, 13 Nov 2025 19:37:58 +0100 Subject: [PATCH 4/8] datasets streaming --- docs/hub/datasets-streaming.md | 158 +++++++++++++++++++++++++++++++++ 1 file changed, 158 insertions(+) create mode 100644 docs/hub/datasets-streaming.md diff --git a/docs/hub/datasets-streaming.md b/docs/hub/datasets-streaming.md new file mode 100644 index 000000000..da6c8c8d9 --- /dev/null +++ b/docs/hub/datasets-streaming.md @@ -0,0 +1,158 @@ +# Streaming datasets + +## Integrated libraries + +If a dataset on the Hub is tied to a [supported library](./datasets-libraries) that allows streaming from Hugging Face, streaming the dataset can be done in just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. For example, [`samsum`](https://huggingface.co/datasets/Samsung/samsum?library=datasets) shows how to do so with 🤗 Datasets below. + +
+ + +
+ +
+ + +
+ +## Using the Hugging Face Client Library + +You can use the [`huggingface_hub`](/docs/huggingface_hub) library to create, delete, and access files from repositories. For example, to stream the `allenai/c4` dataset in python, run + +```python +from huggingface_hub import HfFileSystem + +fs = HfFileSystem() + +repo_id = "allenai/c4" +path_in_repo = "en/c4-train.00000-of-01024.json.gz" + +# Stream the file +with fs.open(f"datasets/{repo_id}/{path_in_repo}", "r", compression="gzip") as f: + print(f.readline()) # read only the first line + # {"text":"Beginners BBQ Class Taking Place in Missoula!...} +``` + +See the [HF filesystem documentation](https://huggingface.co/docs/huggingface_hub/en/guides/hf_file_system) for more information. + +You can also integrate this into your own library! For example, you can quickly stream a CSV dataset using Pandas in a batched manner. +```py +from huggingface_hub import HfFileSystem +import pandas as pd + +fs = HfFileSystem() + +repo_id = "YOUR_REPO_ID" +path_in_repo = "data.csv" + +batch_size = 5 + +# Stream the file +with fs.open(f"datasets/{repo_id}/{path_in_repo}") as f: + for df in pd.read_csv(f, iterator=True, chunksize=batch_size): # read 5 lines at a time + print(len(df)) # 5 +``` + +Streaming is especially useful to read big files on Hugging Face progressively or only small portion. +For example `tarfile` can iterate on the files of TAR archives, `zipfile` can read files from ZIP archives and `pyarrow` can access row groups of Parquet files. + +>![TIP] +> There is an equivalent filesystem implementation in Rust available in [OpenDAL](https://github.com/apache/opendal) + +## Using cURL + +Since all files on the Hub are available via HTTP, you can stream files using `cURL`: + +```bash +>>> curl -L https://huggingface.co/datasets/fka/awesome-chatgpt-prompts/resolve/main/prompts.csv | head -n 5 +"act","prompt" +"An Ethereum Developer","Imagine you are an experienced Ethereum developer tasked with creating... +"SEO Prompt","Using WebPilot, create an outline for an article that will be 2,000 words on the ... +"Linux Terminal","I want you to act as a linux terminal. I will type commands and you will repl... +"English Translator and Improver","I want you to act as an English translator, spelling correct... +``` + +Use range requests to access a specific portion of a file: + +```bash +>>> curl -r 40-88 -L https://huggingface.co/datasets/fka/awesome-chatgpt-prompts/resolve/main/prompts.csv +Imagine you are an experienced Ethereum developer +``` + +Stream from private repositories using an [access token](https://huggingface.co/docs/hub/en/security-tokens): + + +```bash +>>> export HF_TOKEN=hf_xxx +>>> curl -H "Authorization: Bearer $HF_TOKEN" -L https://huggingface.co/... +``` + +## Streaming Parquet + +Parquet is a great format for AI datasets. It offers good compression, a columnar structure for efficient processing and projections, and multi-level metadata for fast filtering, and is suitable for datasets of all sizes. + +Parquet files are divided in row groups that are often around 100MB each. This lets data loaders and data processing frameworks stream data progressively, iterating on row groups. 
+ +### Stream Row Groups + +Use PyArrow to stream row groups from Parquet files on Hugging Face: + +```python +import pyarrow.parquet as pq + +repo_id = "HuggingFaceFW/finewiki" +path_in_repo = "data/enwiki/000_00000.parquet" + +# Stream the Parquet file row group per row group +with pq.ParquetFile(f"hf://datasets/{repo_id}/{path_in_repo}") as pf: + for row_group_idx in range(pf.num_row_groups): + row_group_table = pf.read_row_group(row_group_idx) + df = row_group_table.to_pandas() +``` + +> ![TIP] +> PyArrow supports `hf://` paths out-of-the-box and uses `HfFileSystem` automatically + +Find more information in the [PyArrow documentation](./datasets-pyarrow). + +### Efficient random access + +Row groups are further divied into columns, and columns into pages. Pages are often around 1MB and are the smallest unit of data in Parquet, since this is where compression is applied. Accessing pages enables loading specific rows without having to load a full row group, and is possible if the Parquet file has a page index. However not every Parquet frameworks supports reading at the page level. PyArrow doesn't for example, but the `parquet` crate in Rust does: + +```rust +use std::sync::Arc; +use object_store::path::Path; +use object_store_opendal::OpendalStore; +use opendal::services::Huggingface; +use opendal::Operator; +use parquet::arrow::async_reader::ParquetObjectReader; +use parquet::arrow::ParquetRecordBatchStreamBuilder; +use futures::TryStreamExt; + +#[tokio::main] +async fn main() -> Result<(), Box> { + let repo_id = "HuggingFaceFW/finewiki"; + let path_in_repo = Path::from("data/enwiki/000_00000.parquet"); + let offset = 0; + let limit = 10; + + let builder = Huggingface::default().repo_type("dataset").repo_id(repo_id); + let operator = Operator::new(builder)?.finish(); + let store = Arc::new(OpendalStore::new(operator)); + let reader = ParquetObjectReader::new(store, path_in_repo.clone()); + let batch_stream = + ParquetRecordBatchStreamBuilder::new(reader).await? + .with_offset(offset as usize) + .with_limit(limit as usize) + .build()?; + let results = batch_stream.try_collect::>().await?; + println!("Read {} batches", results.len()); + Ok(()) +} +``` + +> ![TIP] +> In Rust we use OpenDAL's `Huggingface` service which is equivalent to `HfFileSystem` in python + +Pass `write_page_index=True` in PyArrow to include the page index that enables efficient random access. +It notably adds "offset_index_offset" and "offset_index_length" to Parquet columns that you can see in the [Parquet metadata viewer on Hugging Face](https://huggingface.co/blog/cfahlgren1/intro-to-parquet-format). +Page indexes also speed up the [Hugging Face Dataset Viewer](https://huggingface.co/docs/dataset-viewer) and allows it to show data without row group size limit. From fd043a2e22a749165c7a919e0e0235936643f236 Mon Sep 17 00:00:00 2001 From: Quentin Lhoest Date: Fri, 14 Nov 2025 17:29:52 +0100 Subject: [PATCH 5/8] datasets editing --- docs/hub/datasets-downloading.md | 2 +- docs/hub/datasets-editing.md | 105 ++++++++++++++++++++++++++++++- docs/hub/datasets-streaming.md | 2 +- 3 files changed, 106 insertions(+), 3 deletions(-) diff --git a/docs/hub/datasets-downloading.md b/docs/hub/datasets-downloading.md index 17989d1de..1f158ea52 100644 --- a/docs/hub/datasets-downloading.md +++ b/docs/hub/datasets-downloading.md @@ -2,7 +2,7 @@ ## Integrated libraries -If a dataset on the Hub is tied to a [supported library](./datasets-libraries), loading the dataset can be done in just a few lines. 
For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. For example, [`samsum`](https://huggingface.co/datasets/Samsung/samsum?library=datasets) shows how to do so with 🤗 Datasets below. +If a dataset on the Hub is tied to a [supported library](./datasets-libraries), loading the dataset can be done in just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. For example, [`samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows how to do so with 🤗 Datasets below.
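In code, this typically boils down to a couple of lines. Here is a minimal sketch (assuming the `datasets` library is installed; the split name comes from the dataset card):

```python
from datasets import load_dataset

# Downloads the data files (or reuses the local cache) and returns a Dataset
dataset = load_dataset("knkarthick/samsum", split="train")
print(dataset)
```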
diff --git a/docs/hub/datasets-editing.md b/docs/hub/datasets-editing.md index 90520aae3..98ef1581e 100644 --- a/docs/hub/datasets-editing.md +++ b/docs/hub/datasets-editing.md @@ -1,2 +1,105 @@ -# Datasets editing +# Editing datasets +The [Hub](https://huggingface.co/datasets) enables collabporative curation of community and research datasets. We encourage you to explore dataset on the Hub and contribute to dataset curation to help grow the ML community and accelerate progress for everyone. All contributions are welcome! + +Start by [creating a Hugging Face Hub account](https://huggingface.co/join) if you don't have one yet. + +## Edit using the Hub UI + +> [!WARNING] +> This feature is only available for CSV datasets for now. + +The Hub's web-based interface allows users without any developer experience to edit a dataset. + +Open the dataset page and navigate to the dataset **Data Studio** to edit the dataset + +
+ + +
+ +Click on **Toggle edit mode** to enable dataset editing. + +
+ + +
+ +
+ + +
+ +Edit as many cells as you want and finally click **Commit** to commit your changes and leave a commit message. + + +
+ + +
+ +
+ + +
+ +## Using the `huggingface_hub` client library + +The rich features set in the `huggingface_hub` library allows you to manage repositories, including editing dataset files on the Hub. Visit [the client library's documentation](/docs/huggingface_hub/index) to learn more. + +## Integrated libraries + +If a dataset on the Hub is tied to a [supported library](./datasets-libraries), loading the dataset, editing, and pushing your changes can be done in just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. + +For example, [`samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows how to do so with 🤗 Datasets below. + +
+ + +
+ +
+ + +
+ +### Only upload the new data + +Hugging Face's storage uses [Xet](https://huggingface.co/docs/hub/en/xet) which is based on deduplication, and enables in particular deduplicated uploads. +Unlike regular cloud storages, Xet doesn't require datasets to be completely reuploaded to commit changes. +Instead, it automatically detects which parts of the dataset changed and tells the client library to only upload the parts that changed. +To do that, Xet uses a smart algorithm to find chunks of 64kB that already exist on Hugging Face. + +Here is how it works with Pandas: + +```python +import pandas as pd + +# Load the dataset +df = pd.read_csv(f"hf://datasets/{repo_id}/data.csv") + +# Edit the dataset +# df = df.apply(...) + +# Commit the changes +df.to_csv(f"hf://datasets/{repo_id}/data.csv") +``` + +This code first loads a dataset and then edits it. +Once the edits are done, `to_csv()` materializes the file in memory, chunks it, and asks Xet which chunks are already on Hugging Face and which chunks changed, and finally only upload the new data. + +### Optimized Parquet editing + +Therefore the amount of data to reupload depends on the edits and the file structure. + +The Parquet format is columnar and compressed at the page level (pages are around ~1MB). +We optimized Parquet for Xet with [Parquet Content Defined Chunking](https://huggingface.co/blog/parquet-cdc), which ensures unchanged data generally result in unchanged pages. + +Check out if your library supports optimized Parquet in the [supported libraries](./datasets-libraries) page. + +### Streaming + +Libraries with dataset streaming features for end-to-end streaming pipelines are recommended for big datasets. +In this case, the dataset processing runs progressively as the old data arrives and the new data is uploaded to the Hub. + +Check out if your library supports streaming in the [supported libraries](./datasets-libraries) page. diff --git a/docs/hub/datasets-streaming.md b/docs/hub/datasets-streaming.md index da6c8c8d9..3dfe53396 100644 --- a/docs/hub/datasets-streaming.md +++ b/docs/hub/datasets-streaming.md @@ -2,7 +2,7 @@ ## Integrated libraries -If a dataset on the Hub is tied to a [supported library](./datasets-libraries) that allows streaming from Hugging Face, streaming the dataset can be done in just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. For example, [`samsum`](https://huggingface.co/datasets/Samsung/samsum?library=datasets) shows how to do so with 🤗 Datasets below. +If a dataset on the Hub is tied to a [supported library](./datasets-libraries) that allows streaming from Hugging Face, streaming the dataset can be done in just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. For example, [`samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows how to do so with 🤗 Datasets below.
From adff916351d9be95d1ab9c56d4d73c79a508ef64 Mon Sep 17 00:00:00 2001 From: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com> Date: Mon, 17 Nov 2025 19:30:19 +0100 Subject: [PATCH 6/8] Apply suggestions from code review MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Lucain Co-authored-by: célina Co-authored-by: Daniel van Strien Co-authored-by: Julien Chaumond --- docs/hub/datasets-downloading.md | 2 +- docs/hub/datasets-editing.md | 16 ++++++++-------- docs/hub/datasets-libraries.md | 8 ++++---- docs/hub/datasets-streaming.md | 16 +++++++++------- 4 files changed, 22 insertions(+), 20 deletions(-) diff --git a/docs/hub/datasets-downloading.md b/docs/hub/datasets-downloading.md index 1f158ea52..bd0c2b519 100644 --- a/docs/hub/datasets-downloading.md +++ b/docs/hub/datasets-downloading.md @@ -2,7 +2,7 @@ ## Integrated libraries -If a dataset on the Hub is tied to a [supported library](./datasets-libraries), loading the dataset can be done in just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. For example, [`samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows how to do so with 🤗 Datasets below. +If a dataset on the Hub is tied to a [supported library](./datasets-libraries), loading the dataset can be done in just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. For example, [`samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows how to do so with `datasets` below.
diff --git a/docs/hub/datasets-editing.md b/docs/hub/datasets-editing.md index 98ef1581e..7468f275a 100644 --- a/docs/hub/datasets-editing.md +++ b/docs/hub/datasets-editing.md @@ -1,6 +1,6 @@ # Editing datasets -The [Hub](https://huggingface.co/datasets) enables collabporative curation of community and research datasets. We encourage you to explore dataset on the Hub and contribute to dataset curation to help grow the ML community and accelerate progress for everyone. All contributions are welcome! +The [Hub](https://huggingface.co/datasets) enables collaborative curation of community and research datasets. We encourage you to explore the datasets available on the Hub and contribute to their improvement to help grow the ML community and accelerate progress for everyone. All contributions are welcome! Start by [creating a Hugging Face Hub account](https://huggingface.co/join) if you don't have one yet. @@ -9,9 +9,9 @@ Start by [creating a Hugging Face Hub account](https://huggingface.co/join) if y > [!WARNING] > This feature is only available for CSV datasets for now. -The Hub's web-based interface allows users without any developer experience to edit a dataset. +The Hub's web interface allows users without any developer experience to edit a dataset. -Open the dataset page and navigate to the dataset **Data Studio** to edit the dataset +Open the dataset page and navigate to the **Data Studio** tab to begin editing.
@@ -45,7 +45,7 @@ Edit as many cells as you want and finally click **Commit** to commit your chang ## Using the `huggingface_hub` client library -The rich features set in the `huggingface_hub` library allows you to manage repositories, including editing dataset files on the Hub. Visit [the client library's documentation](/docs/huggingface_hub/index) to learn more. +The rich feature set in the `huggingface_hub` library allows you to manage repositories, including editing dataset files on the Hub. Visit [the client library's documentation](/docs/huggingface_hub/index) to learn more. ## Integrated libraries @@ -65,9 +65,9 @@ For example, [`samsum`](https://huggingface.co/datasets/knkarthick/samsum?librar ### Only upload the new data -Hugging Face's storage uses [Xet](https://huggingface.co/docs/hub/en/xet) which is based on deduplication, and enables in particular deduplicated uploads. -Unlike regular cloud storages, Xet doesn't require datasets to be completely reuploaded to commit changes. -Instead, it automatically detects which parts of the dataset changed and tells the client library to only upload the parts that changed. +Hugging Face's storage is powered by [Xet](https://huggingface.co/docs/hub/en/xet), which uses chunk deduplication to make uploads more efficient. +Unlike regular cloud storage, Xet doesn't require files to be entirely reuploaded to commit changes. +Instead, it automatically detects which parts of the dataset have changed and instructs the client library only to upload the updated parts. To do that, Xet uses a smart algorithm to find chunks of 64kB that already exist on Hugging Face. Here is how it works with Pandas: @@ -86,7 +86,7 @@ df.to_csv(f"hf://datasets/{repo_id}/data.csv") ``` This code first loads a dataset and then edits it. -Once the edits are done, `to_csv()` materializes the file in memory, chunks it, and asks Xet which chunks are already on Hugging Face and which chunks changed, and finally only upload the new data. +Once the edits are done, `to_csv()` materializes the file in memory, chunks it, asks Xet which chunks are already on Hugging Face and which chunks have changed, and then uploads only the new data. ### Optimized Parquet editing diff --git a/docs/hub/datasets-libraries.md b/docs/hub/datasets-libraries.md index 253da620b..7bf595640 100644 --- a/docs/hub/datasets-libraries.md +++ b/docs/hub/datasets-libraries.md @@ -32,10 +32,10 @@ _+p*: Requires passing extra arguments to write optimized Parquet files_ ### Streaming Dataset streaming allows to iterate on a dataset on Hugging Face progressively without having to download it completely. -It saves disk space and download time. +It saves disk space since the data is never materialized on disk. It saves memory since only a small portion of the dataset is used at a time. And it saves time, since there is no need to download data in advance prior to the actual CPU or GPU workload. In addition to streaming from Hugging Face, many libraries also support streaming when writing back to Hugging Face. -Therefore they can run end-to-end streaming pipelines: streaming from a source and writing to Hugging Face progressively, often overlapping the downloads, uploads and processing steps. +Therefore, they can run end-to-end streaming pipelines: streaming from a source and writing to Hugging Face progressively, often overlapping the download, upload, and processing steps. 
For more details on how to do streaming, check out the documentation of a library that support streaming (see table above) or the [streaming datasets](./datasets-streaming) documentation if you want to stream datasets from Hugging Face by yourself. @@ -43,10 +43,10 @@ For more details on how to do streaming, check out the documentation of a librar Parquet files on Hugging Face are optimized to improve storage efficiency, accelerate downloads and uploads, and enable efficient dataset streaming and editing: -* [Parquet Content Defined Chunking](https://huggingface.co/blog/parquet-cdc) optimizes Parquet for [Xet](https://huggingface.co/docs/hub/en/xet/index), Hugging Face's storage based on Git. It accelereates uploads and downloads thanks to deduplication and allows efficient file editing +* [Parquet Content Defined Chunking](https://huggingface.co/blog/parquet-cdc) optimizes Parquet for [Xet](https://huggingface.co/docs/hub/en/xet/index), Hugging Face's storage backend. It accelerates uploads and downloads thanks to chunk-based deduplication and allows efficient file editing * Page index accelerates filters when streaming and enables efficient random access, e.g. in the [Dataset Viewer](https://huggingface.co/docs/dataset-viewer) -Some libraries require extra argument to write optimized Parquet files: +Some libraries require extra argument to write optimized Parquet files like `Pandas` and `PyArrow`: * `content_defined_chunking=True` to enable Parquet Content Defined Chunking, for [deduplication](https://huggingface.co/blog/parquet-cdc) and [editing](./datasets-editing) * `write_page_index=True` to include a page index in the Parquet metadata, for [streaming and random access](./datasets-streaming) diff --git a/docs/hub/datasets-streaming.md b/docs/hub/datasets-streaming.md index 3dfe53396..329168add 100644 --- a/docs/hub/datasets-streaming.md +++ b/docs/hub/datasets-streaming.md @@ -2,7 +2,7 @@ ## Integrated libraries -If a dataset on the Hub is tied to a [supported library](./datasets-libraries) that allows streaming from Hugging Face, streaming the dataset can be done in just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. For example, [`samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows how to do so with 🤗 Datasets below. +If a dataset on the Hub is tied to a [supported library](./datasets-libraries) that allows streaming from Hugging Face, streaming the dataset can be done in just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. For example, [`knkarthick/samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows how to do so with `datasets` below.
@@ -16,8 +16,10 @@ If a dataset on the Hub is tied to a [supported library](./datasets-libraries) t ## Using the Hugging Face Client Library -You can use the [`huggingface_hub`](/docs/huggingface_hub) library to create, delete, and access files from repositories. For example, to stream the `allenai/c4` dataset in python, run +You can use the [`huggingface_hub`](/docs/huggingface_hub) library to create, delete, and access files from repositories. For example, to stream the `allenai/c4` dataset in Python, simply install the library (we recommend using the latest version) and run the following code. +```bash +pip install -U huggingface_hub ```python from huggingface_hub import HfFileSystem @@ -32,7 +34,7 @@ with fs.open(f"datasets/{repo_id}/{path_in_repo}", "r", compression="gzip") as f # {"text":"Beginners BBQ Class Taking Place in Missoula!...} ``` -See the [HF filesystem documentation](https://huggingface.co/docs/huggingface_hub/en/guides/hf_file_system) for more information. +See the [`HfFileSystem` documentation](https://huggingface.co/docs/huggingface_hub/en/guides/hf_file_system) for more information. You can also integrate this into your own library! For example, you can quickly stream a CSV dataset using Pandas in a batched manner. ```py @@ -52,11 +54,11 @@ with fs.open(f"datasets/{repo_id}/{path_in_repo}") as f: print(len(df)) # 5 ``` -Streaming is especially useful to read big files on Hugging Face progressively or only small portion. +Streaming is especially useful to read big files on Hugging Face progressively or only a small portion. For example `tarfile` can iterate on the files of TAR archives, `zipfile` can read files from ZIP archives and `pyarrow` can access row groups of Parquet files. ->![TIP] -> There is an equivalent filesystem implementation in Rust available in [OpenDAL](https://github.com/apache/opendal) +> ![TIP] +> There is an equivalent filesystem implementation in Rust available in [OpenDAL](https://github.com/apache/opendal). ## Using cURL @@ -116,7 +118,7 @@ Find more information in the [PyArrow documentation](./datasets-pyarrow). ### Efficient random access -Row groups are further divied into columns, and columns into pages. Pages are often around 1MB and are the smallest unit of data in Parquet, since this is where compression is applied. Accessing pages enables loading specific rows without having to load a full row group, and is possible if the Parquet file has a page index. However not every Parquet frameworks supports reading at the page level. PyArrow doesn't for example, but the `parquet` crate in Rust does: +Row groups are further divided into columns, and columns into pages. Pages are often around 1MB and are the smallest unit of data in Parquet, since this is where compression is applied. Accessing pages enables loading specific rows without having to load a full row group, and is possible if the Parquet file has a page index. However not every Parquet frameworks support reading at the page level. 
PyArrow doesn't for example, but the `parquet` crate in Rust does: ```rust use std::sync::Arc; From babca3fb0459fc82552643f33d37c530dad42859 Mon Sep 17 00:00:00 2001 From: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com> Date: Wed, 19 Nov 2025 13:38:03 +0100 Subject: [PATCH 7/8] Apply suggestions from code review Co-authored-by: Caleb Fahlgren Co-authored-by: burtenshaw --- docs/hub/datasets-editing.md | 14 +++++++------- docs/hub/datasets-libraries.md | 6 +++--- docs/hub/datasets-streaming.md | 5 +++-- 3 files changed, 13 insertions(+), 12 deletions(-) diff --git a/docs/hub/datasets-editing.md b/docs/hub/datasets-editing.md index 7468f275a..5df15c4c6 100644 --- a/docs/hub/datasets-editing.md +++ b/docs/hub/datasets-editing.md @@ -9,7 +9,7 @@ Start by [creating a Hugging Face Hub account](https://huggingface.co/join) if y > [!WARNING] > This feature is only available for CSV datasets for now. -The Hub's web interface allows users without any developer experience to edit a dataset. +The Hub's web interface allows users without any technical expertise to edit a dataset. Open the dataset page and navigate to the **Data Studio** tab to begin editing. @@ -45,11 +45,11 @@ Edit as many cells as you want and finally click **Commit** to commit your chang ## Using the `huggingface_hub` client library -The rich feature set in the `huggingface_hub` library allows you to manage repositories, including editing dataset files on the Hub. Visit [the client library's documentation](/docs/huggingface_hub/index) to learn more. +The `huggingface_hub` library can manage Hub repositories including editing datasets. Visit [the client library's documentation](/docs/huggingface_hub/index) to learn more. ## Integrated libraries -If a dataset on the Hub is tied to a [supported library](./datasets-libraries), loading the dataset, editing, and pushing your changes can be done in just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. +If a dataset on the Hub is compatible with a [supported library](./datasets-libraries), loading, editing, and pushing the dataset takes just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. For example, [`samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows how to do so with 🤗 Datasets below. @@ -66,7 +66,7 @@ For example, [`samsum`](https://huggingface.co/datasets/knkarthick/samsum?librar ### Only upload the new data Hugging Face's storage is powered by [Xet](https://huggingface.co/docs/hub/en/xet), which uses chunk deduplication to make uploads more efficient. -Unlike regular cloud storage, Xet doesn't require files to be entirely reuploaded to commit changes. +Unlike traditional cloud storage, Xet doesn't require the entire dataset to be re-uploaded to commit changes. Instead, it automatically detects which parts of the dataset have changed and instructs the client library only to upload the updated parts. To do that, Xet uses a smart algorithm to find chunks of 64kB that already exist on Hugging Face. @@ -78,7 +78,7 @@ import pandas as pd # Load the dataset df = pd.read_csv(f"hf://datasets/{repo_id}/data.csv") -# Edit the dataset +# Edit part of the dataset # df = df.apply(...) 
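# For instance, a hypothetical cleanup of a "text" column (placeholder name) could be:
# df["text"] = df["text"].str.strip()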
# Commit the changes @@ -90,7 +90,7 @@ Once the edits are done, `to_csv()` materializes the file in memory, chunks it, ### Optimized Parquet editing -Therefore the amount of data to reupload depends on the edits and the file structure. +The amount of data to upload depends on the edits and the file structure. The Parquet format is columnar and compressed at the page level (pages are around ~1MB). We optimized Parquet for Xet with [Parquet Content Defined Chunking](https://huggingface.co/blog/parquet-cdc), which ensures unchanged data generally result in unchanged pages. @@ -99,7 +99,7 @@ Check out if your library supports optimized Parquet in the [supported libraries ### Streaming -Libraries with dataset streaming features for end-to-end streaming pipelines are recommended for big datasets. +For big datasets, libraries with dataset streaming features for end-to-end streaming pipelines are recommended. In this case, the dataset processing runs progressively as the old data arrives and the new data is uploaded to the Hub. Check out if your library supports streaming in the [supported libraries](./datasets-libraries) page. diff --git a/docs/hub/datasets-libraries.md b/docs/hub/datasets-libraries.md index 7bf595640..96a9477ad 100644 --- a/docs/hub/datasets-libraries.md +++ b/docs/hub/datasets-libraries.md @@ -31,10 +31,10 @@ _+p*: Requires passing extra arguments to write optimized Parquet files_ ### Streaming -Dataset streaming allows to iterate on a dataset on Hugging Face progressively without having to download it completely. -It saves disk space since the data is never materialized on disk. It saves memory since only a small portion of the dataset is used at a time. And it saves time, since there is no need to download data in advance prior to the actual CPU or GPU workload. +Dataset streaming allows iterating on a dataset from Hugging Face progressively without having to download it completely. +It saves local disk space because the data is never on disk. It saves memory since only a small portion of the dataset is used at a time. And it saves time, since there is no need to download data before the CPU or GPU workload. -In addition to streaming from Hugging Face, many libraries also support streaming when writing back to Hugging Face. +In addition to streaming *from* Hugging Face, many libraries also support streaming *back to* Hugging Face. Therefore, they can run end-to-end streaming pipelines: streaming from a source and writing to Hugging Face progressively, often overlapping the download, upload, and processing steps. For more details on how to do streaming, check out the documentation of a library that support streaming (see table above) or the [streaming datasets](./datasets-streaming) documentation if you want to stream datasets from Hugging Face by yourself. diff --git a/docs/hub/datasets-streaming.md b/docs/hub/datasets-streaming.md index 329168add..113ad5c67 100644 --- a/docs/hub/datasets-streaming.md +++ b/docs/hub/datasets-streaming.md @@ -2,7 +2,7 @@ ## Integrated libraries -If a dataset on the Hub is tied to a [supported library](./datasets-libraries) that allows streaming from Hugging Face, streaming the dataset can be done in just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. For example, [`knkarthick/samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows how to do so with `datasets` below. 
+If a dataset on the Hub is compatible with a [supported library](./datasets-libraries) that allows streaming from Hugging Face, the dataset can be streamed in just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. For example, [`knkarthick/samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows how to do so with `datasets` below.
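In code, a minimal sketch with `datasets` looks like this (assuming the library is installed and using the same dataset as above):

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset: nothing is downloaded up front
dataset = load_dataset("knkarthick/samsum", split="train", streaming=True)

# Examples are fetched progressively as you iterate
for example in dataset.take(3):
    print(example)
```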
@@ -20,6 +20,7 @@ You can use the [`huggingface_hub`](/docs/huggingface_hub) library to create, de ```bash pip install -U huggingface_hub +``` ```python from huggingface_hub import HfFileSystem @@ -36,7 +37,7 @@ with fs.open(f"datasets/{repo_id}/{path_in_repo}", "r", compression="gzip") as f See the [`HfFileSystem` documentation](https://huggingface.co/docs/huggingface_hub/en/guides/hf_file_system) for more information. -You can also integrate this into your own library! For example, you can quickly stream a CSV dataset using Pandas in a batched manner. +You can also integrate this into your own library! For example, you can quickly stream a CSV dataset using Pandas in batches. ```py from huggingface_hub import HfFileSystem import pandas as pd From 9e513dd5b8de793c36ac525d67752811ea3ebe57 Mon Sep 17 00:00:00 2001 From: Quentin Lhoest Date: Wed, 26 Nov 2025 14:46:23 +0100 Subject: [PATCH 8/8] try multiple columns for additional libraries features --- docs/hub/datasets-libraries.md | 38 ++++++++++++++++------------------ 1 file changed, 18 insertions(+), 20 deletions(-) diff --git a/docs/hub/datasets-libraries.md b/docs/hub/datasets-libraries.md index 96a9477ad..2b1925a3f 100644 --- a/docs/hub/datasets-libraries.md +++ b/docs/hub/datasets-libraries.md @@ -8,26 +8,24 @@ We're happy to welcome to the Hub a set of Open Source libraries that are pushin The table below summarizes the supported libraries and their level of integration. -| Library | Description | Download from Hub | Push to Hub | -| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ----------------- | ----------- | -| [Argilla](./datasets-argilla) | Collaboration tool for AI engineers and domain experts that value high quality data. | ✅ | ✅ | -| [Daft](./datasets-daft) | Data engine for large scale, multimodal data processing with a Python-native interface. | ✅ +s | ✅ +s +p | -| [Dask](./datasets-dask) | Parallel and distributed computing library that scales the existing Python and PyData ecosystem. | ✅ +s | ✅ +s +p* | -| [Datasets](./datasets-usage) | 🤗 Datasets is a library for accessing and sharing datasets for Audio, Computer Vision, and Natural Language Processing (NLP). | ✅ +s | ✅ +s +p | -| [Distilabel](./datasets-distilabel) | The framework for synthetic data generation and AI feedback. | ✅ | ✅ | -| [DuckDB](./datasets-duckdb) | In-process SQL OLAP database management system. | ✅ +s | ❌ | -| [Embedding Atlas](./datasets-embedding-atlas) | Interactive visualization and exploration tool for large embeddings. | ✅ +s | ❌ | -| [Fenic](./datasets-fenic) | PySpark-inspired DataFrame framework for building production AI and agentic applications. | ✅ +s | ❌ | -| [FiftyOne](./datasets-fiftyone) | FiftyOne is a library for curation and visualization of image, video, and 3D data. | ✅ +s | ✅ | -| [Pandas](./datasets-pandas) | Python data analysis toolkit. | ✅ | ✅ +p* | -| [Polars](./datasets-polars) | A DataFrame library on top of an OLAP query engine. | ✅ +s | ✅ | -| [PyArrow](./datasets-pyarrow) | Apache Arrow is a columnar format and a toolbox for fast data interchange and in-memory analytics. | ✅ +s | ✅ +p* | -| [Spark](./datasets-spark) | Real-time, large-scale data processing tool in a distributed environment. | ✅ +s | ✅ +s +p | -| [WebDataset](./datasets-webdataset) | Library to write I/O pipelines for large datasets. 
| ✅ +s | ❌ | - -_+s: Supports Streaming_ -_+p: Writes optimized Parquet files_ -_+p*: Requires passing extra arguments to write optimized Parquet files_ +| Library | Description | Download from Hub | Stream from Hub | Push to Hub | Stream to Hub | Optimized Parquet files | +| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ----------------- | --------------- | ----------- | ------------- | ----------------------- | +| [Argilla](./datasets-argilla) | Collaboration tool for AI engineers and domain experts that value high quality data. | ✅ | ❌ | ✅ | ❌ | ❌ | +| [Daft](./datasets-daft) | Data engine for large scale, multimodal data processing with a Python-native interface. | ✅ | ✅ | ✅ | ✅ | ✅ | +| [Dask](./datasets-dask) | Parallel and distributed computing library that scales the existing Python and PyData ecosystem. | ✅ | ✅ | ✅ | ✅ | ✅* | +| [Datasets](./datasets-usage) | 🤗 Datasets is a library for accessing and sharing datasets for Audio, Computer Vision, and Natural Language Processing (NLP). | ✅ | ✅ | ✅ | ✅ | ✅ | +| [Distilabel](./datasets-distilabel) | The framework for synthetic data generation and AI feedback. | ✅ | ❌ | ✅ | ❌ | ❌ | +| [DuckDB](./datasets-duckdb) | In-process SQL OLAP database management system. | ✅ | ✅ | ❌ | ❌ | ❌ | +| [Embedding Atlas](./datasets-embedding-atlas) | Interactive visualization and exploration tool for large embeddings. | ✅ | ✅ | ❌ | ❌ | ❌ | +| [Fenic](./datasets-fenic) | PySpark-inspired DataFrame framework for building production AI and agentic applications. | ✅ | ✅ | ❌ | ❌ | ❌ | +| [FiftyOne](./datasets-fiftyone) | FiftyOne is a library for curation and visualization of image, video, and 3D data. | ✅ | ✅ | ✅ | ❌ | ❌ | +| [Pandas](./datasets-pandas) | Python data analysis toolkit. | ✅ | ❌ | ✅ | ❌ | ✅* | +| [Polars](./datasets-polars) | A DataFrame library on top of an OLAP query engine. | ✅ | ✅ | ✅ | ❌ | ❌ | +| [PyArrow](./datasets-pyarrow) | Apache Arrow is a columnar format and a toolbox for fast data interchange and in-memory analytics. | ✅ | ✅ | ✅ | ❌ | ✅* | +| [Spark](./datasets-spark) | Real-time, large-scale data processing tool in a distributed environment. | ✅ | ✅ | ✅ | ✅ | ✅ | +| [WebDataset](./datasets-webdataset) | Library to write I/O pipelines for large datasets. | ✅ | ✅ | ❌ | ❌ | ❌ | + +_ * Requires passing extra arguments to write optimized Parquet files_ ### Streaming