
Terrible OCR results with Channel 5 (UK) #929

Closed
cfsmp3 opened this issue Feb 13, 2018 · 32 comments


@cfsmp3
Contributor

cfsmp3 commented Feb 13, 2018

(current master, pre 0.87)

This file (and, well, all of Channel 5)
https://drive.google.com/open?id=1Etq-pv5G3jGqVhhRl7cNrfuw4gaKkLoV

produces terrible results in the OCR, even though the bitmaps seem normal. What's going on?

833
00:59:26,021 --> 00:59:29,340
(CriminaLs den't.just. ’cemmit
I one type of offence. I

834
00:59:29,341 --> 00:59:31,700
Just C05 they SitQLe same petrQL
- that day‘ dQeSn't mean -

835
00:59:31,701 --> 00:59:34,740
m
pietes Of pLLa nt,
@cfsmp3
Contributor Author

cfsmp3 commented Feb 13, 2018

GSOC qualification: 5 points

@harrynull
Contributor

The problem is in quantize_map(alpha, palette, rect->data[0], size, 3, rect->nb_colors); (ocr.c:773).
Commenting this line out solves the problem.

With quantize_map:
[screenshot: OCR output with quantize_map]

Without quantize_map:
[screenshot: OCR output without quantize_map]

By the way, for my own reference and for anyone who wants to look into the OCR, I suggest adding save_spupng("debug.png", indata, w, h, palette, alpha, 16); somewhere near the beginning of ocr_bitmap, so that it's easier to see what the raw image looks like before any further processing. I don't want to open a PR just to add a single debugging line, though.

@Abhinav95
Contributor

This is pretty interesting. The quantize_map() function itself was important (from my discussions with @anshul1912 ) for improving the DVB results. What the function essentially does is 'binarize' the input image into text and non-text regions, ignoring the gradient grayscale values at the edges between them. With this particular set of subtitles, the binarization process seems to be leaving unwanted noisy artifacts around the text regions, which is throwing off the OCR results. This could probably be solved by an additional filtering step to remove the 'salt noise' present in the current images.

@thealphadollar
Contributor

thealphadollar commented Feb 13, 2018

@Abhinav95 I think I can look into this issue (with your help) if you don't mind :)

@Abhinav95
Contributor

@thealphadollar Go right ahead :)

@cfsmp3
Contributor Author

cfsmp3 commented Feb 14, 2018

Things we could do:

  1. Make quantize_map() optional
  2. Replace our function with this library (or another): https://pngquant.org/lib/

@thealphadollar
Contributor

@cfsmp3 For now, I'll try to incorporate the mentioned library. Let's hope something good turns up :)

@thealphadollar
Contributor

TL;DR: I do not think it would be wise to use the libimagequant library. I also tried a library called exoquant, but it doesn't seem to be compatible with our libpng-based PNG decoding. I will search a little more and see if I can find a better library; otherwise I'll resort to making quantize_map() optional.

Using libimagequant makes the process highly inefficient, as can be seen in the screenshot below. This is an implementation of the processes involved, but they were not fully wired into the OCR system. Even so, a full implementation has two issues, elaborated after the screenshot.

[screenshot: timing comparison]

  1. The time will only increase once the full implementation is done, so this method should be avoided. Below is the partially implemented code (which computes the new palette but does not use the result image).

[screenshot: partial implementation]

  2. I could not implement it fully due to an error present in the library, recorded in the screenshot below.

[screenshot: library error]

Code after full implementation:

[screenshot: full implementation]

I looked on the web and it seems to be a problem with the latest version, but even downgrading did not make any difference.

Also, after the partial implementation I could still see the "salt noise" (less than with quantize_map(), though) in the raw image, which suggests that even a full implementation could leave us with the same errors the current function produces.

[screenshot: debug output]

Hence I think it's better to make quantization optional (though it increases the argument count... sadly :( )

@Abhinav95 Please see if I'm wrong somewhere, and suggest if there's a better way to go about this :)

@adarshshukla19

@cfsmp3
I ran it through my Tesseract and it worked just fine.
https://drive.google.com/open?id=1zf-Gb-v_vgMXXbQ0bgeC_1FYPCan-Qg2

@amitdo

amitdo commented Feb 23, 2018

@cfsmp3
Contributor Author

cfsmp3 commented Feb 23, 2018

@adarshshukla19 that link is not public

@adarshshukla19

adarshshukla19 commented Feb 26, 2018 via email

@thealphadollar
Contributor

thealphadollar commented Feb 26, 2018 via email

@cfsmp3
Contributor Author

cfsmp3 commented Feb 26, 2018

@adarshshukla19 The issue is not yet solved, so yes, we're definitely going to continue working on this until we get really reliable results.

@tsmarinov

After the last commit, results on my side are good, but this French channel still produces terrible output:

./ccextractor -nofc -in=ts -datapid 0x8c3 -out=srt -stdout -nobom -trim -noteletext -codec dvbsub -dvblang fra -ocrlang fra ./merged/franceo.ts

  2%  |  00:001
00:00:00,200 --> 00:00:04,039
<font color="#00d300">[L@s ÜIIIÏÜSÏËS Œamç—afla dhflnols.</font>
<font color="#00d300">fl[lg WB@mm@mü,,</font>

  4%  |  00:042
00:00:04,240 --> 00:00:09,559
<font color="#00d300">…[kas</font>
<font color="#00d300">Œ[B®mü…m@sfl</font>

  6%  |  00:093
00:00:09,760 --> 00:00:12,719
<font color="#00d300">Mlëfiâ$‘flläläflüàfiàfllfi,läl®û</font>

 11%  |  00:204
00:00:20,240 --> 00:00:24,359
<font color="#00d300">©@ät…@ä@fiflâ</font>
<font color="#00d300">©m…@äfibz</font>

 17%  |  00:245
00:00:24,560 --> 00:00:34,319
<font color="#00d300">m@puæ5ærflcfl</font>
<font color="#00d300">MÊÆ>[ÈJ©ŒIÏŒ</font>

 19%  |  00:346
00:00:34,520 --> 00:00:37,799
<font color="#00d3d2">=AÿŒ]üu®fiüuääW</font>
<font color="#00d3d2">[@@ñfidŒflfim,</font>

 22%  |  00:387
00:00:38,000 --> 00:00:40,239
<font color="#00d3d2">[Lfl@'éeonom Maiao repose</font>

 24%  |  00:408
00:00:40,440 --> 00:00:42,919
<font color="#00d3d2">sun‘ [La uŒmæ @@ …ms,</font>

here are the materials: https://goo.gl/kncQUn

@krushanbauva
Contributor

The salt noise present in the images can be removed with erosion and dilation.
You can refer to this link for more clarity (sorry for it being highly mathematical in nature :disappointed_relieved: ).
The images give a clear picture of what happens when you apply a proper combination of these filters.

Original image:
[image: filter3]
Processed image:
[image: filter3_processed]
OpenCV provides a reference implementation of erosion and dilation filters.

P.S.: I am not very familiar with the codebase or the Tesseract API either, so it might take me some time to implement this. Though if anyone wants to go ahead, this might help to solve it.

@thealphadollar
Contributor

thealphadollar commented Mar 7, 2018

@krushanbauva I thought of implementing this, but there are certain issues I was facing, so I will take it up when I have some time in hand.

  • It's mathematically heavy, so it requires a lot of homework up front and a lot of testing afterwards so nothing breaks.

  • I need to analyze in depth how we read the images. The way images are read makes a lot of difference, and the way OpenCV does it is probably drastically different from ours, though I believe the basics are somewhat similar.

  • We cannot add OpenCV directly, since that would be a huge dependency we don't really need.

You can surely try to implement it; go through the codebase and ask questions. I'll look back into this when I have a couple of free days. I spent around a week on this, so I can support you a bit on the codebase side :)

@krushanbauva
Contributor

@thealphadollar

  • The implementation code for erosion and dilation was just meant to serve as a reference and to highlight that it's not as difficult/mathematical as it seems.
  • Nor do I want to include the OpenCV library (there's no need, actually). It can hopefully be solved with Tesseract itself 😄
  • I'm currently going through the codebase and the API and working on this issue, and I will keep this thread updated as I progress.

@thealphadollar
Contributor

Sounds amazing :) @krushanbauva

@cfsmp3
Contributor Author

cfsmp3 commented Mar 8, 2018

Good luck @krushanbauva :-)

@cyberdrk

cyberdrk commented Mar 8, 2018

I've got some prior experience in Tesseract and morphological operations, do you guys mind if I join in? :)

@cfsmp3
Contributor Author

cfsmp3 commented Mar 8, 2018 via email

@krushanbauva
Contributor

@cyberdrk You can go through the articles on the official CCExtractor page, which will get you started with the codebase; going through the recent PRs will also give you a lot of intuition about where things are. 😄
Also, there has been some activity on this part of the code recently, so that might help you big time!

P.S.: You are always welcome to collaborate!! 😋

@amitdo

amitdo commented Mar 8, 2018

Tesseract uses Leptonica for image IO and image processing.

@Saiteja31597

I would like to work on this.

@cfsmp3
Contributor Author

cfsmp3 commented Mar 28, 2018 via email

@thealphadollar
Contributor

@cyberdrk @krushanbauva Any leads you guys would like to share? I'm resuming my work on this.

@thealphadollar
Contributor

@cfsmp3 For the past few days I have tried incorporating some more libraries (including Leptonica) but without success; the main problem is incorporating them without changing the structure of the PNG file we currently use.

Doing that would, I believe, be inefficient, since we already have three methods that work pretty much perfectly for most types of video.

If I'm not wrong about the format compatibility, I think we can close this, since we have already solved the problem the issue raised :)

@cfsmp3
Contributor Author

cfsmp3 commented Apr 11, 2018

@thealphadollar PNG here is an output format, but that is totally unrelated to the OCR, which just takes a bitmap.

@OsamaNabih

What happened to the suggestion of implementing dilation and erosion?

@cfsmp3
Contributor Author

cfsmp3 commented Feb 7, 2020 via email

@cfsmp3
Contributor Author

cfsmp3 commented Mar 22, 2023

Closing - confirmed fixed for the sample in the description. Great job @ziexess !

@cfsmp3 cfsmp3 closed this as completed Mar 22, 2023