A simple IPA tokeniser, as simple as in:
>>> from ipatok import tokenise
>>> tokenise('ˈtiːt͡ʃə')
['t', 'iː', 't͡ʃ', 'ə']
>>> tokenise('ʃːjeq͡χːʼjer')
['ʃː', 'j', 'e', 'q͡χːʼ', 'j', 'e', 'r']
tokenise(string, strict=False, replace=False, diphthongs=False, tones=False,
unknown=False, merge=None)
takes an IPA string and returns a list of tokens.
A token usually consists of a single letter together with its accompanying
diacritics. If two letters are connected by a tie bar, they are also considered
a single token. Except for length markers, suprasegmentals are excluded from
the output. Whitespace is also ignored. The function accepts the following
keyword arguments:
- `strict`: if set to `True`, the function ensures that `string` complies with the IPA spec (the 2015 revision); a `ValueError` is raised if it does not. If set to `False` (the default), the role of non-IPA characters is guessed based on their Unicode category (cf. the pitfalls section below).
- `replace`: if set to `True`, the function replaces some common substitutes with their IPA-compliant counterparts, e.g. `g → ɡ`, `ɫ → l̴`, `ʦ → t͡s`. Refer to `ipatok/data/replacements.tsv` for the full list. If both `strict` and `replace` are set to `True`, replacing is done before checking for spec compliance.
- `diphthongs`: if set to `True`, the function groups together non-syllabic vowels with their syllabic neighbours (e.g. `aɪ̯` would form a single token). If set to `False` (the default), vowels are not tokenised together unless there is a connecting tie bar (e.g. `a͡ɪ`).
- `tones`: if set to `True`, tone and word accents are included in the output (accent markers as diacritics and Chao letters as separate tokens). If set to `False` (the default), these are ignored.
- `unknown`: if set to `True`, the output includes (as separate tokens) symbols that cannot be classified as letters, diacritics or suprasegmentals (e.g. `-`, `/`, `$`). If set to `False` (the default), such symbols are ignored. This argument has no effect if `strict` is set to `True`.
- `merge`: expects a `str, str → bool` function to be applied onto each pair of consecutive tokens; the pairs for which it returns `True` are merged together. You can use this to, e.g., plug in your own diphthong detection algorithm:

>>> tokenise(string, diphthongs=False, merge=custom_func)
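For instance, a minimal merge predicate might look like this (the vowel set and the reliance on U+032F, the non-syllabic diacritic, are illustrative assumptions, not the library's internals):

```python
NON_SYLLABIC = "\u032f"  # combining inverted breve below (◌̯)
VOWELS = set("aeiouyɛɔəɪʊ")  # illustrative, far from exhaustive

def merge_diphthongs(left, right):
    """Return True if two consecutive tokens form a falling diphthong."""
    return (left[0] in VOWELS
            and right[0] in VOWELS
            and right.endswith(NON_SYLLABIC))
```

Such a function could then be passed as `tokenise(string, diphthongs=False, merge=merge_diphthongs)`.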
`tokenize` is an alias for `tokenise`.
`replace_digits_with_chao(string, inverse=False)` takes an IPA string and replaces the digits 1–5 (also in superscript) with Chao tone letters. By default, smaller digits are converted into lower tones; if `inverse=True`, they are converted into higher tones. Equal consecutive digits are collapsed into a single Chao letter (e.g. `55 → ˥`).
>>> tokenise(replace_digits_with_chao('ɕia⁵¹ɕyɛ²¹⁴'), tones=True)
['ɕ', 'i', 'a', '˥˩', 'ɕ', 'y', 'ɛ', '˨˩˦']
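The mapping just described can be sketched in a few lines of plain Python. This is an illustration of the documented behaviour, not the package's actual implementation, and it handles plain digits only, not their superscript variants:

```python
CHAO = "˩˨˧˦˥"  # Chao tone letters, lowest to highest

def digits_to_chao(string, inverse=False):
    """Replace the digits 1-5 with Chao letters, collapsing equal runs."""
    out, prev = [], None
    for char in string:
        if char in "12345":
            level = int(char)
            # by default smaller digits mean lower tones; inverse flips this
            letter = CHAO[5 - level] if inverse else CHAO[level - 1]
            if letter != prev:  # collapse equal consecutive digits
                out.append(letter)
            prev = letter
        else:
            out.append(char)
            prev = None
    return "".join(out)
```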
clusterise(string, strict=False, replace=False, diphthongs=False, tones=False,
unknown=False, merge=None)
takes an IPA string and returns its consonant and vowel clusters. The keyword arguments are the same as for `tokenise`:
>>> from ipatok import clusterise
>>> clusterise("kiaːltaːʃ")
['k', 'iaː', 'lt', 'aː', 'ʃ']
`clusterize` is an alias for `clusterise`.
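Given a token list, the clustering step itself amounts to joining consecutive tokens of the same class; a rough sketch (with an intentionally small, illustrative vowel set):

```python
from itertools import groupby

VOWELS = set("aeiouyɛɔəɪʊ")  # illustrative only

def clusters(tokens):
    """Join consecutive tokens of the same (vowel vs consonant) class."""
    return ["".join(group)
            for _, group in groupby(tokens, key=lambda t: t[0] in VOWELS)]
```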
When `strict=True`, each symbol is looked up in the spec and there is no
ambiguity as to how the input should be tokenised.
With `strict=False`, IPA symbols are still handled correctly; a non-IPA symbol
is treated as follows:
- if it is a non-modifier letter (e.g. `Γ`), it is considered a consonant;
- if it is a modifier (e.g. `ˀ`) or a combining mark (e.g. `ə̇`), it is considered a diacritic;
- if it is a modifier tone letter (e.g. `꜍`), it is considered a tone symbol;
- if it is none of these, it is considered an unknown symbol.
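A rough approximation of this guessing logic, based purely on Unicode categories and the Modifier Tone Letters block — the package itself relies on its own data, so treat this as a sketch:

```python
import unicodedata

def guess_role(char):
    """Roughly classify a non-IPA symbol along the lines described above."""
    if 0xA700 <= ord(char) <= 0xA71F:  # Modifier Tone Letters block
        return "tone"
    category = unicodedata.category(char)
    if category == "Lm" or category.startswith("M"):
        return "diacritic"  # modifier letters and combining marks
    if category.startswith("L"):
        return "consonant"  # other letters
    return "unknown"
```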
Regardless of the value of `strict`, whitespace characters and underscores
are considered word boundaries: no token will group together symbols separated
by these characters, even though the characters themselves are not included in
the output.
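In effect, the word-boundary behaviour amounts to splitting the input on whitespace and underscores before tokenising. A hypothetical helper (not part of the package's API) illustrating that split:

```python
import re

def split_words(string):
    """Split an IPA string on word boundaries (whitespace and underscores)."""
    return [word for word in re.split(r"[\s_]+", string) if word]
```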
This is a Python 3 package without dependencies and it is offered at the Cheese Shop:

    # usually within a virtual environment
    pip install ipatok
- lingpy is a historical linguistics suite that includes an ipa2tokens function.
- loanpy is another historical linguistics suite which works with IPA strings.
- ipapy is a package for working with IPA strings.
- ipalint provides a command-line tool for checking IPA datasets for errors and inconsistencies.
- asjp provides functions for converting between IPA and ASJP.
MIT. Do as you please and praise the snake gods.