This package tokenizes (splits) words, sentences, phrases and graphemes, based on Unicode text segmentation (UAX #29), for Unicode version 15.0.0. Details and usage are in the respective package documentation.
Any time our code operates on individual words, we are tokenizing. Often, we do it ad hoc, such as splitting on spaces, which gives inconsistent results. The Unicode standard is better: it is multi-lingual, and handles punctuation, special characters, etc.
The uax29 module has 4 tokenizers, from coarsest to finest granularity: sentences → phrases → words → graphemes. Words is the most common choice.
You might use this for inverted indexes, full-text search, TF-IDF, BM25, embeddings, or anything else that needs word boundaries.
If you're doing embeddings, the definition of “meaningful unit” will depend on your application. You might choose sentences, phrases, words, or a combination.
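To illustrate choosing a granularity, here's a small sketch that tokenizes the same text into sentences and into words. It assumes the sentences package exposes the same Segmenter API (NewSegmenter, Next, Bytes, Err) as the words package shown in the quick start below.

package main

import (
	"fmt"
	"log"

	"github.com/clipperhouse/uax29/sentences"
	"github.com/clipperhouse/uax29/words"
)

func main() {
	text := []byte("Hello, 世界. Nice dog! 👍🐶")

	// Coarse: one token per sentence
	// (assumes the sentences package mirrors the words Segmenter API)
	s := sentences.NewSegmenter(text)
	for s.Next() {
		fmt.Printf("sentence: %q\n", s.Bytes())
	}
	if s.Err() != nil {
		log.Fatal(s.Err())
	}

	// Fine: one token per word
	w := words.NewSegmenter(text)
	for w.Next() {
		fmt.Printf("word: %q\n", w.Bytes())
	}
	if w.Err() != nil {
		log.Fatal(w.Err())
	}
}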
We use the official Unicode test suites to verify conformance.
go get "github.com/clipperhouse/uax29/words"
import "github.com/clipperhouse/uax29/words"
text := []byte("Hello, 世界. Nice dog! 👍🐶")

segments := words.NewSegmenter(text)      // A segmenter is an iterator over the words

for segments.Next() {                     // Next() returns true until end of data or error
	fmt.Printf("%q\n", segments.Bytes())  // Do something with the current token
}

if segments.Err() != nil {                // Check the error
	log.Fatal(segments.Err())
}
jargon, a text pipelines package for CLI and Go, which consumes this package.
C# (also by me)