This is a repository for a Japanese tokenizer that works with the HuggingFace Transformers library.
You can use JapaneseTransformerTokenizer like transformers.BertJapaneseTokenizer.
You can also write issues in Japanese.
Documentation is available on Read the Docs.
Install the package from PyPI:
pip install jptranstokenizer
This is an example of using jptranstokenizer.JapaneseTransformerTokenizer with the sentencepiece model of nlp-waseda/roberta-base-japanese and Juman++.
Before running the following steps, you need to install pyknp and Juman++.
>>> from jptranstokenizer import JapaneseTransformerTokenizer
>>> tokenizer = JapaneseTransformerTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese")
>>> tokens = tokenizer.tokenize("外国人参政権")
# tokens: ['▁外国', '▁人', '▁参政', '▁権']
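Because JapaneseTransformerTokenizer is meant to be used like transformers.BertJapaneseTokenizer, the standard HuggingFace encoding calls should also work. The following is a minimal sketch assuming the class inherits the usual PreTrainedTokenizer interface from transformers; these calls are not taken from this repository's documentation.
>>> encoding = tokenizer("外国人参政権")
>>> encoding["input_ids"]
# input_ids: a list of token ids, including any special tokens added by the model
>>> tokenizer.decode(encoding["input_ids"])
# decodes the ids back to a string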
Note that different dependencies are required depending on the type of tokenizer you use (see the sketch below).
See also the Quickstart on Read the Docs.
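For example, models whose word segmentation relies on MeCab need a MeCab binding and dictionary installed first. The sketch below is illustrative only: fugashi and unidic-lite are assumed prerequisites carried over from transformers' BertJapaneseTokenizer, and loading this particular model through jptranstokenizer is an assumption, not a documented example.
# assumed prerequisite for MeCab-based models: pip install fugashi unidic-lite
>>> from jptranstokenizer import JapaneseTransformerTokenizer
>>> tokenizer = JapaneseTransformerTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-whole-word-masking")
>>> tokens = tokenizer.tokenize("外国人参政権")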
Another paper is in preparation. Be sure to check here again before citing.
@inproceedings{Suzuki-2023-nlp,
jtitle = {{異なる単語分割システムによる日本語事前学習言語モデルの性能評価}},
title = {{Performance Evaluation of Japanese Pre-trained Language Models with Different Word Segmentation Systems}},
jauthor = {鈴木, 雅弘 and 坂地, 泰紀 and 和泉, 潔},
author = {Suzuki, Masahiro and Sakaji, Hiroki and Izumi, Kiyoshi},
jbooktitle = {言語処理学会 第29回年次大会 (NLP2023)},
booktitle = {29th Annual Meeting of the Association for Natural Language Processing (NLP)},
year = {2023},
pages = {894--898}
}
- Pretrained Japanese BERT models (including a Japanese tokenizer)
- Author NLP Lab. at Tohoku University
- https://github.com/cl-tohoku/bert-japanese
- SudachiTra
- Author Works Applications
- https://github.com/WorksApplications/SudachiTra
- UD_Japanese-GSD
- Author megagonlabs
- https://github.com/megagonlabs/UD_Japanese-GSD
- Juman++
- Author Kurohashi Lab. at Kyoto University
- https://github.com/ku-nlp/jumanpp