2 changes: 2 additions & 0 deletions docs/hub/datasets-connectors.md
@@ -0,0 +1,2 @@
# Datasets connectors

2 changes: 1 addition & 1 deletion docs/hub/datasets-downloading.md
@@ -2,7 +2,7 @@

## Integrated libraries

If a dataset on the Hub is tied to a [supported library](./datasets-libraries), loading the dataset can be done in just a few lines. To see how to access a dataset, click the "Use this dataset" button on the dataset page. For example, [`samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows how to do this with `datasets` below.

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage.png"/>
105 changes: 105 additions & 0 deletions docs/hub/datasets-editing.md
@@ -0,0 +1,105 @@
# Editing datasets

The [Hub](https://huggingface.co/datasets) enables collaborative curation of community and research datasets. We encourage you to explore the datasets available on the Hub and contribute to their improvement to help grow the ML community and accelerate progress for everyone. All contributions are welcome!

Start by [creating a Hugging Face Hub account](https://huggingface.co/join) if you don't have one yet.

## Edit using the Hub UI

> [!WARNING]
> This feature is only available for CSV datasets for now.
> **Review comment on lines +9 to +10 (Contributor):** Nice! Will keep in mind when expanding in future

The Hub's web interface allows users without any technical expertise to edit a dataset.

Open the dataset page and navigate to the **Data Studio** tab to begin editing.

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/data_studio_button-min.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/data_studio_button_dark-min.png"/>
</div>

Click on **Toggle edit mode** to enable dataset editing.

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/toggle_edit_button-min.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/toggle_edit_button_dark-min.png"/>
</div>

You can then click on a cell's edit button to modify its value.

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/edit_cell_button-min.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/edit_cell_button_dark-min.png"/>
</div>

Edit as many cells as you want, then click **Commit** and write a commit message to save your changes.


<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/commit_button-min.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/commit_button_dark-min.png"/>
</div>

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/commit_message-min.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/commit_message_dark-min.png"/>
</div>

## Using the `huggingface_hub` client library

The `huggingface_hub` library lets you manage Hub repositories, including editing datasets. Visit [the client library's documentation](/docs/huggingface_hub/index) to learn more.
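
For example, here is a minimal sketch that commits an edited file to a dataset repo (the repo id and file name are placeholders):

```python
from huggingface_hub import HfApi

api = HfApi()

# Upload a locally edited file to a dataset repo in a single commit
api.upload_file(
    path_or_fileobj="data.csv",     # local file containing your edits
    path_in_repo="data.csv",        # destination path in the repo
    repo_id="username/my-dataset",  # placeholder repo id
    repo_type="dataset",
    commit_message="Fix typos in data.csv",
)
```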

## Integrated libraries

If a dataset on the Hub is compatible with a [supported library](./datasets-libraries), loading, editing, and pushing the dataset takes just a few lines. To see how to access a dataset, click the "Use this dataset" button on the dataset page.

For example, [`samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows how to do this with 🤗 Datasets below.

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage-dark.png"/>
</div>

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage-modal.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage-modal-dark.png"/>
</div>
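
For instance, here is a minimal sketch of the load-edit-push loop with 🤗 Datasets (the target repo id is a placeholder):

```python
from datasets import load_dataset

# Load the dataset from the Hub
ds = load_dataset("knkarthick/samsum", split="train")

# Edit it, e.g. normalize whitespace in the summaries
ds = ds.map(lambda x: {"summary": x["summary"].strip()})

# Push the edited dataset to a repo you own (placeholder repo id)
ds.push_to_hub("username/samsum-clean")
```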

### Only upload the new data

Hugging Face's storage is powered by [Xet](https://huggingface.co/docs/hub/en/xet), which uses chunk deduplication to make uploads more efficient.
Unlike traditional cloud storage, Xet doesn't require the entire dataset to be re-uploaded to commit changes.
Instead, it automatically detects which parts of the dataset have changed and instructs the client library to upload only the updated parts.
To do that, Xet uses a content-defined chunking algorithm to find 64kB chunks that already exist on Hugging Face.

Here is how it works with Pandas:

```python
import pandas as pd

# Placeholder repo id, e.g. "username/my-dataset"
repo_id = "username/my-dataset"

# Load the dataset
df = pd.read_csv(f"hf://datasets/{repo_id}/data.csv")

# Edit part of the dataset
# df = df.apply(...)

# Commit the changes (index=False avoids writing the index as an extra column)
df.to_csv(f"hf://datasets/{repo_id}/data.csv", index=False)
```

This code first loads a dataset and then edits it.
Once the edits are done, `to_csv()` materializes the file in memory, chunks it, asks Xet which chunks are already on Hugging Face and which chunks have changed, and then uploads only the new data.

### Optimized Parquet editing

The amount of data to upload depends on the edits and the file structure.

The Parquet format is columnar and compressed at the page level (pages are ~1MB).
We optimized Parquet for Xet with [Parquet Content Defined Chunking](https://huggingface.co/blog/parquet-cdc), which ensures that unchanged data generally results in unchanged pages.
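
To illustrate, editing a Parquet file in place follows the same pattern as the CSV example above; this sketch assumes a placeholder `username/my-dataset` repo with a `data.parquet` file containing a `text` column:

```python
import pandas as pd

repo_id = "username/my-dataset"  # placeholder repo id

# Load a Parquet file from the Hub
df = pd.read_parquet(f"hf://datasets/{repo_id}/data.parquet")

# Edit part of the dataset (assumes a "text" column)
df["text"] = df["text"].str.strip()

# Write it back: with Content Defined Chunking, unchanged pages deduplicate
# against the previous version and only the new data is uploaded
df.to_parquet(f"hf://datasets/{repo_id}/data.parquet")
```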
> **Review comment (Member):** add a visual of how those files and/or datasets are marked on the Hub 😁

Check whether your library supports optimized Parquet on the [supported libraries](./datasets-libraries) page.

### Streaming

For big datasets, we recommend libraries with streaming support, which enable end-to-end streaming pipelines: the dataset is processed progressively as the old data arrives, and the new data is uploaded to the Hub as it is produced.
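
For example, here is a sketch of such a pipeline with 🤗 Datasets, assuming a version that supports pushing iterable datasets (the target repo id is a placeholder):

```python
from datasets import load_dataset

# Stream the source dataset instead of downloading it first
ds = load_dataset("knkarthick/samsum", split="train", streaming=True)

# Transformations are lazy and run example by example as the data arrives
ds = ds.map(lambda x: {"summary": x["summary"].strip()})

# Upload the processed data progressively (placeholder repo id)
ds.push_to_hub("username/samsum-clean")
```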

Check whether your library supports streaming on the [supported libraries](./datasets-libraries) page.
52 changes: 40 additions & 12 deletions docs/hub/datasets-libraries.md
@@ -4,24 +4,52 @@
The Datasets Hub has support for several libraries in the Open Source ecosystem.
Thanks to the [huggingface_hub Python library](/docs/huggingface_hub), it's easy to enable sharing your datasets on the Hub.
We're happy to welcome to the Hub a set of Open Source libraries that are pushing Machine Learning forward.

## Libraries table

The table below summarizes the supported libraries and their level of integration.

| Library | Description | Download from Hub | Push to Hub |
| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ----------------- | ----------- |
| [Argilla](./datasets-argilla) | Collaboration tool for AI engineers and domain experts that value high quality data. | ✅ | ✅ |
| [Daft](./datasets-daft) | Data engine for large scale, multimodal data processing with a Python-native interface. | ✅ +s | ✅ +s +p |
| [Dask](./datasets-dask) | Parallel and distributed computing library that scales the existing Python and PyData ecosystem. | ✅ +s | ✅ +s +p* |
| [Datasets](./datasets-usage) | 🤗 Datasets is a library for accessing and sharing datasets for Audio, Computer Vision, and Natural Language Processing (NLP). | ✅ +s | ✅ +s +p |
| [Distilabel](./datasets-distilabel) | The framework for synthetic data generation and AI feedback. | ✅ | ✅ |
| [DuckDB](./datasets-duckdb) | In-process SQL OLAP database management system. | ✅ +s | ❌ |
| [Embedding Atlas](./datasets-embedding-atlas) | Interactive visualization and exploration tool for large embeddings. | ✅ +s | ❌ |
| [Fenic](./datasets-fenic) | PySpark-inspired DataFrame framework for building production AI and agentic applications. | ✅ +s | ❌ |
| [FiftyOne](./datasets-fiftyone) | FiftyOne is a library for curation and visualization of image, video, and 3D data. | ✅ +s | ✅ |
| [Pandas](./datasets-pandas) | Python data analysis toolkit. | ✅ | ✅ +p* |
| [Polars](./datasets-polars) | A DataFrame library on top of an OLAP query engine. | ✅ +s | ✅ |
| [PyArrow](./datasets-pyarrow) | Apache Arrow is a columnar format and a toolbox for fast data interchange and in-memory analytics. | ✅ +s | ✅ +p* |
| [Spark](./datasets-spark) | Real-time, large-scale data processing tool in a distributed environment. | ✅ +s | ✅ +s +p |
| [WebDataset](./datasets-webdataset) | Library to write I/O pipelines for large datasets. | ✅ +s | ❌ |

_+s: Supports Streaming_
_+p: Writes optimized Parquet files_
_+p*: Requires passing extra arguments to write optimized Parquet files_
> **Review comment (Contributor):** Or other suggestion: add columns to have `| Library | Description | Download from Hub | Stream from Hub | Push to Hub | Stream to Hub | Optimized parquet files |`
>
> **Reply (lhoestq, Member Author, Nov 17, 2025):** I liked the current approach because there is more space and proportionally more ✅ than ❌ than with more columns, but with columns dedicated to streaming I feel like we put more emphasis on streaming and optimized parquet features, which is great. I'll do the change and check how it looks.
>
> **Review comment (Member):** nice!


### Streaming

Dataset streaming allows iterating on a dataset from Hugging Face progressively, without having to download it completely.
It saves local disk space, because the data never needs to be written to disk. It saves memory, since only a small portion of the dataset is loaded at a time. And it saves time, since there is no need to finish downloading before starting the CPU or GPU workload.
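
For example, with 🤗 Datasets (marked with `+s` in the table above), nothing is downloaded up front:

```python
from datasets import load_dataset

# Stream the dataset: examples are fetched on the fly
ds = load_dataset("knkarthick/samsum", split="train", streaming=True)

# Iterate over the first examples as they stream in
for example in ds.take(3):
    print(example["summary"])
```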

In addition to streaming *from* Hugging Face, many libraries also support streaming *back to* Hugging Face.
Therefore, they can run end-to-end streaming pipelines: streaming from a source and writing to Hugging Face progressively, often overlapping the download, upload, and processing steps.

For more details on streaming, check out the documentation of a library that supports it (see the table above), or the [streaming datasets](./datasets-streaming) documentation if you want to stream datasets from Hugging Face yourself.

### Optimized Parquet files

Parquet files on Hugging Face are optimized to improve storage efficiency, accelerate downloads and uploads, and enable efficient dataset streaming and editing:
* [Parquet Content Defined Chunking](https://huggingface.co/blog/parquet-cdc) optimizes Parquet for [Xet](https://huggingface.co/docs/hub/en/xet/index), Hugging Face's storage backend. It accelerates uploads and downloads thanks to chunk-based deduplication and allows efficient file editing
* Page index accelerates filters when streaming and enables efficient random access, e.g. in the [Dataset Viewer](https://huggingface.co/docs/dataset-viewer)

> **Review comment (Member):** add a visual of how those files or datasets are marked on the Hub 😁
>
> **Reply (Member):** hint hint @lhoestq =)

Some libraries, like `Pandas` and `PyArrow`, require extra arguments to write optimized Parquet files:

* `content_defined_chunking=True` to enable Parquet Content Defined Chunking, for [deduplication](https://huggingface.co/blog/parquet-cdc) and [editing](./datasets-editing)
* `write_page_index=True` to include a page index in the Parquet metadata, for [streaming and random access](./datasets-streaming)
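
As a sketch with Pandas, passing the arguments listed above (the exact keyword spellings may vary across pandas/PyArrow versions, so check your version's documentation):

```python
import pandas as pd

df = pd.DataFrame({"text": ["hello", "world"], "label": [0, 1]})

# Write Hub-optimized Parquet; extra keyword arguments are forwarded to PyArrow.
# Keyword names as listed above; some versions may expect e.g. a "use_" prefix.
df.to_parquet(
    "hf://datasets/username/my-dataset/data.parquet",  # placeholder repo id
    engine="pyarrow",
    content_defined_chunking=True,  # Parquet Content Defined Chunking, for deduplication
    write_page_index=True,          # page index, for streaming and random access
)
```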

## Integrating data libraries and tools with the Hub
