Merge pull request #1 from bhavnicksm/development

v0.0.1a4

Showing 20 changed files with 1,895 additions and 1 deletion.
@@ -0,0 +1,3 @@
__pycache__/
chonkie.egg-info/
.DS_Store
@@ -0,0 +1,56 @@
# 🦛 Chonkie Docs

> UGH, do I _need_ to explain how to use Chonkie? Man, that's a bummer... To be honest, Chonkie is very easy, with little documentation necessary, but just in case, I'll include some here.

# Table Of Contents

- [🦛 Chonkie Docs](#-chonkie-docs)
  - [Table Of Contents](#table-of-contents)
  - [Chunkers](#chunkers)
    - [TokenChunker](#tokenchunker)
      - [Initialization](#initialization)
      - [Methods](#methods)
      - [Example](#example)

# Chunkers

## TokenChunker

The `TokenChunker` class splits text into overlapping chunks of a specified token size. This is particularly useful for applications that need to process text in manageable pieces while maintaining some context between chunks.

### Initialization

To initialize a `TokenChunker`, provide a tokenizer, the maximum number of tokens per chunk, and the number of tokens to overlap between chunks.

```python
from tokenizers import Tokenizer
from chonkie.chunker import TokenChunker

# Initialize the tokenizer (example using the GPT-2 tokenizer)
tokenizer = Tokenizer.from_pretrained("gpt2")

# Initialize the TokenChunker
chunker = TokenChunker(tokenizer=tokenizer, chunk_size=512, chunk_overlap=128)
```

### Methods

`chunk(text: str) -> List[Chunk]`

This method splits the input text into overlapping chunks of the specified token size.

Args:
* `text` (str): The input text to be chunked.

Returns:
* `List[Chunk]`: A list of `Chunk` objects containing the chunked text and metadata.

### Example

```python
text = "Your input text here."
chunks = chunker.chunk(text)

for chunk in chunks:
    print(f"Chunk text: {chunk.text}")
    print(f"Start index: {chunk.start_index}")
    print(f"End index: {chunk.end_index}")
    print(f"Token count: {chunk.token_count}")
```
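The overlap behaviour above can be illustrated with a minimal sliding-window sketch. This is only an illustration of the idea, not Chonkie's actual implementation: it operates on a pre-tokenized list of strings, and its `start_index`/`end_index` are token offsets rather than character offsets.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Chunk:
    text: str
    start_index: int
    end_index: int
    token_count: int

def sliding_window_chunk(tokens: List[str], chunk_size: int, chunk_overlap: int) -> List[Chunk]:
    """Split a token list into windows of chunk_size tokens that overlap by chunk_overlap."""
    step = chunk_size - chunk_overlap  # advance this many tokens per chunk
    chunks = []
    for start in range(0, len(tokens), step):
        window = tokens[start:start + chunk_size]
        chunks.append(Chunk(
            text=" ".join(window),
            start_index=start,               # token offset, unlike Chonkie's char offsets
            end_index=start + len(window),
            token_count=len(window),
        ))
        if start + chunk_size >= len(tokens):
            break  # the last window reached the end of the input
    return chunks

tokens = [f"tok{i}" for i in range(10)]
chunks = sliding_window_chunk(tokens, chunk_size=4, chunk_overlap=2)
for c in chunks:
    print(c.start_index, c.end_index, c.token_count)
```

With `chunk_size=4` and `chunk_overlap=2`, consecutive windows share their last/first two tokens, which is how context is preserved across chunk boundaries.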
@@ -1 +1,80 @@
# chonkie
![Chonkie Logo](assets/chonkie_logo_br_transparent_bg.png)
# 🦛 Chonkie

So I found myself making another RAG bot (for the 2342148th time), and while explaining to my juniors why we should use chunking in our RAG bots, I realised that I would have to write chunking all over again unless I used the bloated software library X or the extremely feature-less library Y. _WHY CAN I NOT HAVE GOOD THINGS IN LIFE, UGH?_

Can't I just install, import and run chunking, and not have to worry about dependencies, bloat, speed or other factors?

Well, with Chonkie you can! (chonkie boi is a gud boi)

✅ All the CHONKs you'd ever need <br/>
✅ Easy to use: Install, Import, CHONK <br/>
✅ No bloat, just CHONK <br/>
✅ Cute CHONK mascot <br/>
✅ Moto Moto's favorite Python library <br/>

What're you waiting for, **just CHONK it**!

# Table of Contents
- [🦛 Chonkie](#-chonkie)
  - [Table of Contents](#table-of-contents)
  - [Why do we need Chunking?](#why-do-we-need-chunking)
  - [Approaches to doing chunking](#approaches-to-doing-chunking)
  - [Installation](#installation)
  - [Usage](#usage)
  - [Citation](#citation)

# Why do we need Chunking?

Here are some arguments for why one would want to chunk text in a RAG scenario:

- Most RAG pipelines today are bottlenecked by context length. Even when future LLMs exceed 1M-token lengths, the LLM is not the only model in the pipeline: bi-encoder retrievers, cross-encoder rerankers, and task-specific models such as answer-relevancy or answer-attribution models can each impose their own context-length bottleneck.
- Even with infinite context, there is no free lunch on the context side: understanding a string takes at least O(n) work, so models will never scale to longer context for free. With smaller contexts, the search and generation pipeline is more efficient (in response latency).
- Research suggests that a lot of random, noisy context can actually lead to more hallucination in model responses. If we ensure that each chunk passed to the model is relevant, the model ends up giving better responses.

# Approaches to doing chunking

1. Token Chunking (a.k.a. Fixed-Size Chunking or Sliding-Window Chunking)
2. Word Chunking
3. Sentence Chunking
4. Semantic Chunking
5. Semantic Double-Pass Merge (SDPM) Chunking
6. Context-aware Chunking
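To make the difference between the first and third strategies concrete, here is a toy sketch. It is purely illustrative: the "tokens" are whitespace-split words and the sentence splitter is a naive regex, whereas real implementations use proper tokenizers and sentence segmenters.

```python
import re
from typing import List

def token_chunks(text: str, chunk_size: int) -> List[str]:
    # Toy "tokens": whitespace-split words stand in for real subword tokens.
    words = text.split()
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]

def sentence_chunks(text: str, max_sentences: int) -> List[str]:
    # Naive sentence split on ., !, or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [" ".join(sentences[i:i + max_sentences]) for i in range(0, len(sentences), max_sentences)]

text = "Chunking splits text. It keeps pieces manageable. Retrieval gets better context."
print(token_chunks(text, 5))     # fixed-size windows, may cut mid-sentence
print(sentence_chunks(text, 2))  # respects sentence boundaries
```

Token chunking gives uniform sizes but can split a sentence in half; sentence chunking keeps boundaries intact at the cost of variable chunk sizes. The other strategies in the list trade off these properties further using embeddings or document structure.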
# Installation

To install chonkie, simply run:

```bash
pip install chonkie
```

# Usage

Here's a basic example to get you started:

```python
from chonkie import TokenChunker

# Initialize the chunker
chunker = TokenChunker()

# Chunk some text
chunks = chunker.chunk("Your text here")
print(chunks)
```

# Citation

If you use Chonkie in your research, please cite it as follows:

```bibtex
@misc{chonkie2024,
  author       = {Minhas, Bhavnick},
  title        = {Chonkie: A Lightweight Chunking Library for RAG Bots},
  year         = {2024},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/bhavnick/chonkie}},
}
```
@@ -0,0 +1,35 @@
from .chunker import (
    Sentence,
    SemanticSentence,
    Chunk,
    SentenceChunk,
    SemanticChunk,
    BaseChunker,
    TokenChunker,
    WordChunker,
    SentenceChunker,
    SemanticChunker,
    SPDMChunker,
)

__version__ = "0.0.1a4"
__name__ = "chonkie"
__author__ = "Bhavnick Minhas"

__all__ = [
    "__name__",
    "__version__",
    "__author__",
    "Sentence",
    "SemanticSentence",
    "Chunk",
    "SentenceChunk",
    "SemanticChunk",
    "BaseChunker",
    "WordChunker",
    "TokenChunker",
    "SentenceChunker",
    "SemanticChunker",
    "SPDMChunker",
]
@@ -0,0 +1,20 @@
from .base import Chunk, BaseChunker
from .token import TokenChunker
from .word import WordChunker
from .sentence import Sentence, SentenceChunk, SentenceChunker
from .semantic import SemanticSentence, SemanticChunk, SemanticChunker
from .spdm import SPDMChunker

__all__ = [
    "BaseChunker",
    "Chunk",
    "TokenChunker",
    "WordChunker",
    "Sentence",
    "SentenceChunk",
    "SentenceChunker",
    "SemanticSentence",
    "SemanticChunk",
    "SemanticChunker",
    "SPDMChunker",
]
@@ -0,0 +1,45 @@
from typing import List
from dataclasses import dataclass
from abc import ABC, abstractmethod


@dataclass
class Chunk:
    """Dataclass representing a text chunk with metadata."""
    text: str
    start_index: int
    end_index: int
    token_count: int


class BaseChunker(ABC):
    """Abstract base class for all chunker implementations.

    All chunker implementations should inherit from this class and implement
    the chunk() method according to their specific chunking strategy.
    """

    @abstractmethod
    def chunk(self, text: str) -> List[Chunk]:
        """Split text into chunks according to the implementation strategy.

        Args:
            text: Input text to be chunked

        Returns:
            List of Chunk objects containing the chunked text and metadata
        """
        pass

    def __call__(self, text: str) -> List[Chunk]:
        """Make the chunker callable directly.

        Args:
            text: Input text to be chunked

        Returns:
            List of Chunk objects containing the chunked text and metadata
        """
        return self.chunk(text)

    def __repr__(self) -> str:
        """Return string representation of the chunker."""
        return f"{self.__class__.__name__}()"
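Given this base class, a concrete chunker only needs to implement `chunk()`; `__call__` then works for free. A minimal sketch follows — the `WhitespaceChunker` is a hypothetical illustration, not part of the library, and `Chunk`/`BaseChunker` are restated in abbreviated form so the snippet runs standalone.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List

# Abbreviated restatement of the definitions above, for a self-contained example.
@dataclass
class Chunk:
    text: str
    start_index: int
    end_index: int
    token_count: int

class BaseChunker(ABC):
    @abstractmethod
    def chunk(self, text: str) -> List[Chunk]: ...
    def __call__(self, text: str) -> List[Chunk]:
        return self.chunk(text)

class WhitespaceChunker(BaseChunker):
    """Hypothetical chunker: groups whitespace-separated words into fixed-size chunks."""
    def __init__(self, words_per_chunk: int = 4) -> None:
        self.words_per_chunk = words_per_chunk

    def chunk(self, text: str) -> List[Chunk]:
        words = text.split()
        chunks = []
        for i in range(0, len(words), self.words_per_chunk):
            piece = words[i:i + self.words_per_chunk]
            chunk_text = " ".join(piece)
            start = text.find(piece[0])  # naive char offset; breaks on repeated words, fine for a sketch
            chunks.append(Chunk(
                text=chunk_text,
                start_index=start,
                end_index=start + len(chunk_text),
                token_count=len(piece),
            ))
        return chunks

chunker = WhitespaceChunker(words_per_chunk=3)
for c in chunker("one two three four five"):  # __call__ dispatches to chunk()
    print(c)
```

Because `chunk()` is the only abstract method, each concrete strategy (token, word, sentence, semantic) can plug in its own splitting logic while callers interact with every chunker the same way.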