bert-chunker: efficient and trained chunking for unstructured documents

Model | Paper

bert-chunker is a text chunker based on BERT, with a classifier head that predicts the start token of each chunk (for use in RAG, etc.); using a sliding window, it cuts documents of any size into chunks. It is finetuned from nreimers/MiniLM-L6-H384-uncased, and the whole training took 10 minutes on an Nvidia P40 GPU with a 50 MB synthesized dataset. This repo includes code for defining the model, generating the dataset, training, and testing.
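The sliding-window idea above can be sketched in plain Python. This is an illustrative toy, not the repo's actual code: `predict_chunk_starts` stands in for the BERT classifier head (here it simply treats capitalized tokens as chunk starts), and the window/stride values are made up for the example.

```python
def predict_chunk_starts(window_tokens):
    # Stand-in for the BERT classifier head: mark a token as a chunk
    # start whenever it begins with an uppercase letter. The real model
    # would return this from per-token classification logits.
    return [i for i, t in enumerate(window_tokens) if t[:1].isupper()]

def chunk_document(tokens, window=4, stride=4):
    """Slide a fixed-size window over the token list, collect the
    predicted chunk-start offsets, and cut the document there."""
    starts = set()
    for offset in range(0, len(tokens), stride):
        window_tokens = tokens[offset:offset + window]
        for i in predict_chunk_starts(window_tokens):
            starts.add(offset + i)
    # Always cut at the document boundaries as well.
    boundaries = sorted(starts | {0, len(tokens)})
    return [tokens[a:b] for a, b in zip(boundaries, boundaries[1:])]

tokens = "The cat sat . Then it left .".split()
print(chunk_document(tokens))
# → [['The', 'cat', 'sat', '.'], ['Then', 'it', 'left', '.']]
```

Because the window advances by a fixed stride, documents longer than BERT's 512-token limit are handled one window at a time while chunk boundaries are accumulated globally.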

Generate dataset

See generate_dataset.ipynb
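The notebook itself defines the actual pipeline; the sketch below only illustrates one plausible labeling scheme for such a dataset, assuming chunk-start prediction is framed as per-token binary classification: short passages are concatenated into one sequence, and the first token of each passage is labeled 1 (chunk start) while all others are labeled 0. The function name and scheme are assumptions for illustration.

```python
def make_example(passages):
    """Concatenate passages into one token sequence; label the first
    token of each passage 1 (chunk start) and every other token 0."""
    tokens, labels = [], []
    for passage in passages:
        words = passage.split()
        if not words:
            continue
        tokens.extend(words)
        labels.extend([1] + [0] * (len(words) - 1))
    return tokens, labels

toks, labs = make_example(["the sky is blue", "stocks fell today"])
print(labs)  # → [1, 0, 0, 0, 1, 0, 0]
```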

Train from the base model all-MiniLM-L6-v2

Run

bash train.sh

Inference

See test.py
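test.py is the reference; as a rough sketch of the decoding step it performs, a per-token classifier emits two logits per token ([not-start, start]), and chunk starts are the tokens whose softmax "start" probability clears a threshold. The function name and the 0.5 threshold are assumptions here, not values taken from the repo.

```python
import math

def chunk_starts_from_logits(logits, threshold=0.5):
    """Turn per-token [not-start, start] logit pairs into the indices
    of predicted chunk-start tokens via a 2-class softmax."""
    starts = []
    for i, (l0, l1) in enumerate(logits):
        p_start = math.exp(l1) / (math.exp(l0) + math.exp(l1))
        if p_start > threshold:
            starts.append(i)
    return starts

logits = [(0.2, 2.0), (1.5, -0.3), (-1.0, 1.0)]
print(chunk_starts_from_logits(logits))  # → [0, 2]
```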

Citation

If this work is helpful, please kindly cite as:

@article{BertChunker,
  title={BertChunker: Efficient and Trained Chunking for Unstructured Documents}, 
  author={Yannan Luo},
  year={2024},
  url={https://github.com/jackfsuia/BertChunker}
}