The tokenizer splits text into sentences and words.
This software is part of a larger collection of natural language processing tools known as "the OpeNER project". You can find more information about the project at the OpeNER portal, where you will also find references to terms like KAF (an XML standard for representing linguistic annotations in text), component, cores, scenarios and pipelines.
Installing the tokenizer can be done by executing:
gem install tokenizer
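You can verify the installation from the shell; the --help flag on the main command is assumed here to print usage information, as it does for the server and daemon commands described below:

gem list tokenizer
tokenizer --help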
Please bear in mind that all components in OpeNER take KAF as input and output KAF by default.
You should now be able to call the tokenizer as a regular shell command by its name. Once installed, the gem's executables normally sit on your PATH, so you can call them directly from anywhere.
Tokenizing some text:
echo "This is English text" | tokenizer -l en --no-kaf
This will result in:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<KAF version="v1.opener" xml:lang="en">
  <kafHeader>
    <linguisticProcessors layer="text">
      <lp name="opener-sentence-splitter-en" timestamp="2013-05-31T11:39:31Z" version="0.0.1"/>
      <lp name="opener-tokenizer-en" timestamp="2013-05-31T11:39:32Z" version="1.0.1"/>
    </linguisticProcessors>
  </kafHeader>
  <text>
    <wf length="4" offset="0" para="1" sent="1" wid="w1">This</wf>
    <wf length="2" offset="5" para="1" sent="1" wid="w2">is</wf>
    <wf length="7" offset="8" para="1" sent="1" wid="w3">English</wf>
    <wf length="4" offset="16" para="1" sent="1" wid="w4">text</wf>
  </text>
</KAF>
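Because all components read and write KAF, you can pipe the tokenizer's output straight into other OpeNER components, and vice versa. A minimal sketch, assuming the opener-language-identifier gem is also installed and provides a language-identifier command:

echo "This is English text" | language-identifier | tokenizer

The language identifier wraps the text in KAF with a detected xml:lang attribute, which the tokenizer then uses instead of the -l option.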
The available languages for tokenization are: English (en), German (de), Dutch (nl), French (fr), Spanish (es), Italian (it)
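To compare the rule sets, a small shell loop can run the same sentence through each supported language (output omitted):

for lang in en de nl fr es it; do
  echo "This is a test" | tokenizer -l $lang --no-kaf
done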
The tokenizer can take KAF as input (reading the text from the <raw> element), and does so by default. For example:
echo "<?xml version='1.0' encoding='UTF-8' standalone='no'?><KAF version='v1.opener' xml:lang='en'><raw>This is what I call, a test!</raw></KAF>" | tokenizer
This will result in:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<KAF version="v1.opener" xml:lang="en">
  <kafHeader>
    <linguisticProcessors layer="text">
      <lp name="opener-sentence-splitter-en" timestamp="2013-05-31T11:39:31Z" version="0.0.1"/>
      <lp name="opener-tokenizer-en" timestamp="2013-05-31T11:39:32Z" version="1.0.1"/>
    </linguisticProcessors>
  </kafHeader>
  <text>
    <wf length="4" offset="0" para="1" sent="1" wid="w1">This</wf>
    <wf length="2" offset="5" para="1" sent="1" wid="w2">is</wf>
    <wf length="2" offset="8" para="1" sent="1" wid="w3">an</wf>
    <wf length="7" offset="11" para="1" sent="1" wid="w4">English</wf>
    <wf length="4" offset="19" para="1" sent="1" wid="w5">text</wf>
  </text>
</KAF>
If the -k (--kaf) argument is passed, the -l (--language) argument is ignored: the language is taken from the xml:lang attribute of the input KAF instead.
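In practice you will usually read KAF from a file rather than echoing it inline. A short sketch; input.kaf is a hypothetical file name, and the explicit -k flag is optional since KAF input is the default:

tokenizer -k < input.kaf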
You can launch a tokenizer webservice by executing:
tokenizer-server
This will launch a mini webserver hosting the webservice. It defaults to port 9292, so you can access it at http://localhost:9292.
To launch it on a different port, provide the -p [port-number] option like this:
tokenizer-server -p 1234
It then launches at http://localhost:1234
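You can then call the service over HTTP. A minimal sketch, assuming the service accepts a form-encoded input field as other OpeNER webservices do; check the documentation page mentioned below for the exact parameter names:

curl -d "input=This is English text" http://localhost:1234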
Documentation on the webservice is available at the URLs listed above. For more information on how to launch the webservice, run the command with the --help option.
Last but not least, the tokenizer ships with a daemon that can read jobs from and write jobs to Amazon SQS queues. For more information, type:
tokenizer-daemon --help
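Because the daemon reads from and writes to Amazon SQS, it needs AWS credentials. A minimal sketch, assuming the standard AWS SDK environment variables are honored; the values below are placeholders:

export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_REGION="eu-west-1"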
This component runs best in an environment suited for OpeNER components. You can find helper tools in the OpeNER installer and an installation guide on the OpeNER website.
At a minimum you need the following system setup:
- Perl 5
- MRI 1.9.3
- Maven (for building the Gem)
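You can check these requirements from the shell:

perl -v   # should report Perl 5
ruby -v   # should report MRI 1.9.3
mvn -v    # only needed when building the gem yourself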
The tokenizer module is a wrapper around a Perl script, which performs the actual tokenization based on rules (when to break a character sequence). The tokenizer already supports many languages. Have a look at the core script to figure out how to extend it to new languages.
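To get a feel for these rules, feed the tokenizer punctuation-heavy text and inspect where it breaks the character sequence (output omitted here):

echo "Mr. Smith didn't arrive, sadly." | tokenizer -l en --no-kaf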
The component is a fat wrapper around the actual language technology core: a rule-based tokenizer implemented in Perl. You can find the core technologies in their own repositories under the opener-project organization on GitHub.
If you encounter problems, please email support@opener-project.eu or file an issue in the issue tracker.
- Fork it ( http://github.com/opener-project/tokenizer/fork )
- Create your feature branch (git checkout -b my-new-feature)
- Commit your changes (git commit -am 'Add some feature')
- Push to the branch (git push origin my-new-feature)
- Create a new Pull Request