I managed to retrain your approach with my own dataset and it performs quite well! However, the runtime/inference speed seems to be slower compared with several other approaches (e.g. EAST) - especially if ported to a non-GPU version. Do you have any hints/ideas on how to improve the inference speed? Could the model be retrained differently to better fit smaller inference scales?
Emmm... The main reason is that EAST uses a lightweight backbone network (PVANET).
Note that with the same backbone network (VGG16), TextField runs at 6.0 fps on ic15, which is on par with EAST (6.52 fps). Maybe you could try a faster backbone network.
And I think you do not need to retrain the model and could just try smaller inference scales first.
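To illustrate the "smaller inference scales" suggestion: a common trick is to downscale the input so its longer side stays under a cap, while keeping both dimensions divisible by the backbone's stride (32 for VGG16-style networks). This is only a hedged sketch - the function name and defaults below are hypothetical and not from the TextField code, which may handle resizing differently.

```python
def inference_size(h, w, max_side=768, stride=32):
    """Compute a reduced inference resolution for an h x w image.

    Downscales so the longer side is at most `max_side` (never upscales),
    then rounds each dimension down to a multiple of `stride`, since
    VGG16-style backbones expect inputs divisible by 32.
    Returns (new_h, new_w); detections can be mapped back to the original
    image by multiplying with (h / new_h, w / new_w).
    """
    scale = min(1.0, max_side / max(h, w))
    new_h = max(stride, int(h * scale) // stride * stride)
    new_w = max(stride, int(w * scale) // stride * stride)
    return new_h, new_w

# Example: a 720x1280 frame is capped to 416x768 before the forward pass.
print(inference_size(720, 1280))  # -> (416, 768)
```

Lowering `max_side` trades recall on small text for roughly quadratic speedups, so it is worth sweeping a few values on a validation set before committing to one.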
Hi, could you describe what changes you made to run this on your custom dataset?
Thanks