Q&A: Indic - length of the compressed codes #654
Devanagari script has a large set of ligature forms for consonant conjuncts. These are combinations of Consonant + Viraama + Consonant (CVC), or CVCVC, or even the rarer CVCVCVC. Currently the generated unicharset uses the combination of the conjunct ligatures followed by vowel maatraas as well as vowel modifiers as a recognition unit, leading to a unicharset of 5000+ lines. You may want to consider recognizing the conjunct cluster as a unit, with the vowel maatraas and vowel modifiers handled separately. A special case is the i maatraa, which comes before (to the left of) the consonant in Devanagari. For a listing of orthographic syllables by frequency for Sanskrit, and for a list of ligature sets for Hindi, please see:
Font Comparison Samples | Attested Hindi Ligatures
|
The LSTM recognizer is currently trained to recognize the sequence of unicodes for Indic languages. This reduces the size of the output softmax of the network from the 5000+ elements in the unicharset to ~140. (There is an analogous process for Chinese, Japanese, and Korean that doesn't use the unicode encoding, but it is a similar idea, and the codes are strictly limited in length.) The unicharset is used as a filter in the beam search to allow only sensible grapheme/syllable combinations of unicodes, so it doesn't output complete garbage text.
The consequence of this recoding is that it runs a lot faster, but it has to learn to output a long sequence for each grapheme/syllable. The recoding system that maps from unicharset elements to the sequence of unicodes currently allows a maximum of 9 unicodes per grapheme/syllable, including any viramas.
I'm running a new training experiment this weekend to try a new coding scheme, in which pairs are mapped to a single code, allowing a long CVCVCVC string to be encoded using just CCCC, cutting down from 7 codes to 4. This will probably increase the size of the output softmax to ~170, but reduce the length of the average code sequence by about 1/3, which might be easier for it to learn without slowing it down much. It will take a couple of weeks to tell if it works, but if it does I will check in the code, upload new traineddatas, and close this issue. If it doesn't work, I will have to think again... |
Ray,
Thank you for the explanation regarding unicharset compression and your new
strategy for Indic graphemes.
Since the unicharset is being used as a filter, it will be important to
include the most common conjunct clusters in it, which may differ from
language to language.
Some more questions:
Are the desired_characters and forbidden_characters used in the process of
creating the text corpus for different languages?
How many text lines are you using for the training of Devanagari-script
languages, e.g. Sanskrit, Hindi, Marathi, etc.? Is it all/only from Wikipedia?
- excuse the brevity, sent from mobile
|
The text corpus is from *all* the www, taken several years ago, plus more
recent data from wiki-something.
The text is divided by language automatically, so there is a separate
stream for each of the Devanagari-based languages (as there is for the
Latin-based languages) and clipped to 1GB for each language.
For each language, the text is frequency-counted and cleaned by multiple
methods, and sometimes this automatic cleaning is too stringent, or not
stringent enough, so forbidden_characters and desired_characters are used
as a guide in the cleanup process. There are other language-specific numbers,
like a 1-in-n discard ratio for the frequency.
For some languages, the amount of data produced at the end is very thin.
The unicharset is extracted from what remains, as is the wordlist that is
published in langdata.
For the LSTM training, I resorted to using Google's parallel infrastructure
to render enough text in all the languages.
However much or little corpus text there is, the rendering process makes
50000 chunks of 50 words, each rendered in a different combination of font
and random degradation, which results in 400000-800000 rendered textlines.
The words are chosen to approximately echo the real frequency of conjunct
clusters (characters in most languages) in the source text, while also
using the most frequent words.
This process is all done without significant manual intervention, but
counts of the number of generated textlines indicate when it has gone
badly, usually due to a lack of fonts or a lack of corpus text.
I recently stopped training chr, iku, khm, mya after discovering that I
have no rendered textlines that contain anything other than digits and
punctuation.
Community input is therefore extremely useful, and usually results in edits
to forbidden_characters and desired_characters, which in turn guides the
filtration process.
Community-provided corpus text would be useful for languages that have very
little or no training data, given appropriate copyright/licensing clearance.
The languages with very little corpus text are:
bih
chr
dzo
iku
snd
syr
tgk
tir
so these are likely to have poor recognition accuracy.
--
Ray.
|
Ray, Thank you for the info on corpus building. I have added links to resources for bih and snd in the langdata repo just now; please see there.
I also added a link to this discussion at #622 regarding support for Khmer. I will forward your post to the tesseract-ocr group to reach other community members too. |
I tried creating training data for Khmer and was able to create box/tiff pairs with Khmer text. It is possible that the fonts directory you used did not have Khmer fonts, or that for some reason 'latin' fonts were used instead of Khmer fonts. I will post the files separately under an issue in langdata. I used the --find_fonts function of text2image to find the fonts that covered 70% of the Khmer training text. It may be useful in the training process to check the given font list for coverage and give an error or warning if it falls below a certain threshold, before going ahead with building the box/tiff pairs. edit: --oem 0 works with the khm.traineddata; --oem 1 recognizes it incorrectly. |
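For reference, a font-coverage check of the kind described in the comment above can be run with text2image's --find_fonts mode. This is only a sketch: the paths below are placeholders, not the exact command used; the 70% threshold matches the figure mentioned above.
```bash
# List the fonts under --fonts_dir that cover at least 70% of the characters
# in the given training text (paths are placeholders).
text2image \
  --find_fonts \
  --fonts_dir=/usr/share/fonts \
  --text=langdata/khm/khm.training_text \
  --min_coverage=0.7 \
  --outputbase=khm_font_check
```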
Commands similar to the above can be used to get a fontlist that can be plugged into language-specific.sh, to ensure that it uses fonts that are available on the system and have adequate coverage. Here is the output file from the above command on my system.
|
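To illustrate how such a fontlist might be plugged into language-specific.sh, as mentioned in the comment above: the per-script font arrays in that script can be trimmed to the fonts that passed the coverage check. The variable name and font names below are illustrative placeholders, not the actual output file referred to.
```bash
# Hypothetical excerpt for language-specific.sh, restricted to fonts
# that passed the coverage check on this system.
KHMER_FONTS=( \
    "Khmer OS" \
    "Khmer OS Battambang" \
    "Khmer OS Siemreap" \
    )
```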
Update: after going back to the www to get fresh data, I believe that my corpus text is now good for: |
Ray: Regarding Myanmar, please see the discussion at tesseract-ocr/langdata#13
http://crubadan.org/languages/my lists three primary sources for Myanmar/Burmese. One is the Myanmar Wikipedia; the other two are: http://www.unicode.org/udhr/d/udhr_mya.html Also see: tesseract-ocr/langdata#46
You may also find the charts at http://www.virtualvinodh.com/wp/character-matrix/ useful for a comparison of various Indic scripts; please see the rows for Burmese for Myanmar. |
@theraysmith You wrote in January:
In recent trainings, I still see large unicharsets (e.g., with ALL akshara combinations from the training_text in Devanagari).
Depending on the training text, this number can go as high as 6,000-7,000. I thought the intention was to reduce this number. Also, when training with hin.lstm as the starting point for replace-top-layer training, while the original .lstm file is about 8 MB, the intermediate .lstm files are about 80 MB and the _checkpoint file is about 160 MB. Is this to be expected, or is something wrong with the training process? |
@theraysmith Did the above approach work? In https://github.com/tesseract-ocr/docs/blob/master/das_tutorial2016/7Building%20a%20Multi-Lingual%20OCR%20Engine.pdf, you have described what counts as a character in Devanagari and used the following example: rdvika - र्द्विक - 0930 094D 0926 094D 0935 093F 0915. I would actually split the above into two aksharas, each ending in either the implicit a, a maatraa, or a combining mark. So the above would be: rdvi - र्द्वि - 0930 094D 0926 094D 0935 093F and ka - क - 0915. To reduce the number of akshara combinations, I would suggest splitting, e.g., the possible combinations with ka (and these do not include the combining Vedic accents!!)
Imagine these for every consonant cluster combined with every vowel sign (or maatraa), with other signs like candrabindu, anusvara and visarga added to each combination: the number of combinations will be HUGE. By splitting the consonant-cluster part from the maatraa-and-other-signs combination, the number of combinations can be cut down drastically (a rough count is sketched after this comment).
and consonants and consonant clusters such as
and (with reph)
|
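To give a rough sense of the scale argued in the comment above (the numbers here are illustrative assumptions, not counts from any actual training text): with on the order of 800 conjunct clusters, about 16 vowel signs plus the implicit a, and 3 other signs (candrabindu, anusvara, visarga), the two choices of recognition unit compare roughly as:
```latex
\[
\underbrace{800 \times (16 + 1) \times (3 + 1)}_{\text{whole aksharas as units}} \approx 54\,400
\qquad \text{versus} \qquad
\underbrace{800 + 16 + 3}_{\text{cluster, maatraa, sign as separate units}} = 819
\]
```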
Please see pages 48-75 of http://www.sanskritweb.net/itrans/itmanual2003.pdf |
Hey Ray, I am confused about the data prepared for tesseract 4.0 training. Could you please explain it, and explain the training process of tesseract-ocr with LSTM? |
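For context on the question above, the 4.0-era LSTM training flow is roughly: render box/tiff pairs from corpus text, convert them to .lstmf line data, then run lstmtraining on that data. A minimal sketch using the tesstrain.sh driver from the tesseract training tools is shown below; the directories are placeholders.
```bash
# Generate LSTM line-training data (rendered textlines converted to .lstmf
# files plus a list file) for one language; paths are placeholders.
tesstrain.sh \
  --lang hin \
  --linedata_only \
  --langdata_dir ./langdata \
  --tessdata_dir ./tessdata \
  --output_dir ./train_output
# The generated .lstmf list is then passed to lstmtraining, either to train
# from scratch or to fine-tune / replace the top layer of an existing model.
```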
@Shreeshrii Ray has said:
How can I render my text at |
It is the default option in text2image.
|
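For anyone following along, rendering a box/tiff training pair with text2image generally looks like the sketch below; the font, paths, and page count are placeholders rather than the exact settings discussed above.
```bash
# Render one page of training text in a specific font, producing a .tif
# image and the matching .box file (font and paths are placeholders).
text2image \
  --text=langdata/khm/khm.training_text \
  --outputbase=khm.KhmerOS.exp0 \
  --font="Khmer OS" \
  --fonts_dir=/usr/share/fonts \
  --max_pages=1
```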
Hello guys.
|
Hello! Is there any chance to get the rendered ground-truth data for English (eng.traineddata)? AFAIK it was trained on a huge set of fonts, and some of them are not freely accessible. I'm not sure I'm able to render it locally. Thank you! |
#648 (comment)
@theraysmith Can you explain a little more about this?