
Inference speed #2

Open
juicebox18 opened this issue Mar 18, 2019 · 2 comments

Comments

@juicebox18

I managed to retrain your approach on my own dataset and it performs quite well! However, the runtime/inference speed seems slower than several other approaches (e.g. EAST), especially when ported to a non-GPU version. Do you have any hints/ideas on how to improve the inference speed? Could the model be retrained differently to better fit smaller inference scales?

@YukangWang
Owner

Emmm... The main reason is that EAST uses a lightweight backbone network (PVANET).

Note that with the same backbone network (VGG16), TextField runs at 6.0 fps on ic15, which is on par with EAST (6.52 fps). Maybe you could try a faster backbone network.
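To compare against the numbers above (6.0 fps vs. 6.52 fps), you would first want to measure throughput consistently. A minimal sketch of such a benchmark is below; `measure_fps` and `dummy_infer` are hypothetical names, and `dummy_infer` is a stand-in for the model's actual forward pass.

```python
import time

def measure_fps(infer, num_images=50):
    """Time repeated calls to `infer` and return frames per second."""
    start = time.perf_counter()
    for _ in range(num_images):
        infer()
    elapsed = time.perf_counter() - start
    return num_images / elapsed

def dummy_infer():
    # Stand-in for a real forward pass; replace with your model's
    # inference call on a preloaded image.
    time.sleep(0.01)  # pretend each image takes ~10 ms

fps = measure_fps(dummy_infer)
print(f"{fps:.1f} fps")
```

Benchmarking on the same images before and after swapping the backbone (or changing the inference scale) makes the speed/accuracy trade-off concrete.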

And I think you do not need to retrain the model and could just try smaller inference scales first.
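Trying smaller inference scales can be sketched as follows. This is an assumption about a typical preprocessing step, not TextField's actual code: `scaled_size`, the `max_side=768` cap, and the multiple-of-32 rounding (matching the downsampling stride of VGG16-style backbones) are all illustrative choices.

```python
def scaled_size(width, height, max_side=768, stride=32):
    """Shrink an image so its longer side is at most `max_side`,
    keeping the aspect ratio, and round both sides down to a
    multiple of the network stride."""
    scale = min(1.0, max_side / max(width, height))
    new_w = max(stride, int(width * scale) // stride * stride)
    new_h = max(stride, int(height * scale) // stride * stride)
    return new_w, new_h

# An ic15-sized frame (1280x720) shrunk for faster inference:
print(scaled_size(1280, 720))  # -> (768, 416)
```

Resizing the input to the returned size (e.g. with `cv2.resize`) before the forward pass reduces compute roughly quadratically in the scale factor, at some cost in recall on small text.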

@QuickLearner171998

> I managed to retrain your approach on my own dataset and it performs quite well! However, the runtime/inference speed seems slower than several other approaches (e.g. EAST), especially when ported to a non-GPU version. Do you have any hints/ideas on how to improve the inference speed? Could the model be retrained differently to better fit smaller inference scales?

Hi, can you guide me through the changes you made to run this on your custom dataset?
Thanks
