If I understand correctly, the "test" split should be used to evaluate the model. The annotations are present in the training and validation sets, but for the test data the annotation files look like this:

```json
{
  "metadata": {
    "page_count": 1,
    "page_sizes_at_200dpi": [[1704, 2203]]
  }
}
```

As you can see, there are no bounding boxes, field names, or values. How can I get the annotated test data?
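For reference, the difference between the splits can be checked programmatically. This is a minimal sketch using only the annotation content quoted above; the dictionary is built inline rather than loaded from a file, and the key-based check is an assumption about how the splits differ, not an official API:

```python
import json

# Test-set annotation content as quoted above: metadata only,
# no field-level labels.
test_annotation = {
    "metadata": {
        "page_count": 1,
        "page_sizes_at_200dpi": [[1704, 2203]],
    }
}

def has_field_labels(annotation: dict) -> bool:
    """Heuristic: train/val annotations carry keys beyond "metadata"
    (bounding boxes, field names, values); test annotations do not."""
    return any(key != "metadata" for key in annotation)

print(has_field_labels(test_annotation))  # False for the test split
```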
@Davo00 , to avoid research overfitting to the test set, the test set is indeed shared without annotations. Instead, you can evaluate your post-competition submissions at https://rrc.cvc.uab.es/?ch=26 .
@sulc thanks for the quick response. I have registered and tried uploading the sample provided on the web page, but this resulted in an error saying the ground truth path was not found. Is there an issue, or is it supposed to be like that?
Hi @Davo00, thank you for the report, I confirm the submissions are currently broken. I have contacted the admins of the Robust Reading Competition portal to help me fix this.