Automatic Annotation #2529
@glenn-jocher Can you help me with that? What can I do?
@QuarTerll I don't maintain this repository; I maintain Ultralytics YOLOv3 and YOLOv5.
@glenn-jocher Oops, sorry... I was working on YOLOv5 recently, so your name came to mind when I ran into trouble. Sorry to bother you :<
I found that the logs show "Run SiamMask Model" every time my inference runs, so I guess the SiamMask model is used to draw things on the image, such as the bounding box. I then found a model called SiamMask in the directory named serverless, so I am trying to deploy this model first and then try the automatic annotation again. Hope it works :)
@nmanovic Could you please help me with that? What can I do? For now, my local SiamMask model is still building. Three hours have already passed and I cannot see any logs :< Am I heading in the right direction?
I have finished deploying the "saic_vul" model in the serverless/pytorch directory, and it is still not working. On my local CVAT site it says "Automatic annotation finished for task 17", yet no annotations are shown on my image dataset. So, @nmanovic, I see that you added a milestone, hmm... Does that mean this feature is not finished at the moment?
@QuarTerll There are a couple of issues in the semi-automatic nuctl pipeline; I will address them in my upcoming PR for automatic annotation, probably by tomorrow. The major one: make sure to install nuctl 1.4.8 for now, until I update the documentation and bump the supported version. If you are still having problems, I suggest waiting a couple of days; I will ping you here when I am done.
I had a problem with nuctl 1.4.8 while deploying functions, but was able to deploy with 1.5.7. However, the models didn't appear at http://localhost:8080/models. I have also filed the full details as issue #2541. Can you please help me with this as well? Is it something related to version mismatches?
@beep-love Yes, most likely. To debug the functions, you can use the nuclio dashboard at localhost:8070; make sure the function is up and running there. Wait until tonight; I will open a PR.
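Besides clicking through the dashboard UI, the function states can be checked programmatically. The sketch below assumes the nuclio dashboard is on localhost:8070 and that its REST API serves function records under /api/functions keyed by function name; the exact field names are an assumption based on what the dashboard returns, so treat this as illustrative:

```python
import json
from urllib import request

def function_states(dashboard="http://localhost:8070"):
    """Query the nuclio dashboard (assumed API path) and map each
    function name to its reported state, e.g. 'ready' or 'error'."""
    with request.urlopen(f"{dashboard}/api/functions") as resp:
        funcs = json.load(resp)
    return {name: f["status"].get("state") for name, f in funcs.items()}

# Offline illustration of the same parsing on a captured response
# (the function name here is hypothetical):
sample = {"openvino.omz.public.faster_rcnn":
          {"status": {"state": "ready", "httpPort": 32768}}}
states = {name: f["status"].get("state") for name, f in sample.items()}
print(states)  # → {'openvino.omz.public.faster_rcnn': 'ready'}
```

A function stuck in any state other than "ready" will not show up as a usable model in CVAT, so this is a quick first check before digging into container logs.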
I changed the line to image: quay.io/nuclio/dashboard:1.5.7-amd64 and rebuilt the container. I also have a running instance of the function in my nuclio dashboard. Still the same error! I renamed my nuclio release file from nuctl-1.5.7-linux-amd64 to nuctl-1.5.7; does that make any difference to this problem?
How do you deploy your function?
Please post a screenshot from your
@QuarTerll It might be due to the label: it might be an int instead of "0.0". Double-check that.
Same here. I am able to finish the automatic annotation, but no actual annotations are displayed after it completes and I click into the job, while the AI tools work fine. It seems there is a problem in saving the results of automatic annotation.
@jahaniam Thanks for your help. And another question:
Looks OK to me. I don't know why it's not working for you. Try without sudo and give it
@leemengxing I am trying to do this semi-automatic annotation myself: run the inference, save the results in some dataset format, load them into the CVAT platform, and then do the annotations.
@leemengxing @jahaniam @gen-ko @beep-love I have finished the semi-automatic annotation myself. Here is an approach that may be of some help.
Here is my Python code as an example.
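The original snippet did not survive the page capture. As a stand-in, here is a minimal sketch of the workflow described above: convert your model's detections into CVAT's annotation schema and push them to a task via the REST API. The endpoint and field names follow CVAT's v1 API as I understand it, and the host, task_id, label_id, and auth values are all hypothetical placeholders:

```python
import json
from urllib import request

def to_cvat_shapes(detections, frame, label_id):
    """Convert [x1, y1, x2, y2] boxes into entries for CVAT's 'shapes' list."""
    return [
        {
            "type": "rectangle",
            "frame": frame,
            "label_id": label_id,   # must match a label id of your task
            "points": [x1, y1, x2, y2],
            "occluded": False,
            "attributes": [],
        }
        for (x1, y1, x2, y2) in detections
    ]

def upload(host, task_id, shapes, auth_header):
    """PATCH the shapes to the task's annotations endpoint (assumed:
    /api/v1/tasks/{id}/annotations?action=create)."""
    body = json.dumps({"version": 0, "tags": [],
                       "shapes": shapes, "tracks": []}).encode()
    req = request.Request(
        f"{host}/api/v1/tasks/{task_id}/annotations?action=create",
        data=body, method="PATCH",
        headers={"Content-Type": "application/json",
                 "Authorization": auth_header},
    )
    return request.urlopen(req)

# Example conversion (no network call; the upload step needs a live server):
shapes = to_cvat_shapes([(360.0, 50.0, 1263.0, 720.0)], frame=0, label_id=1)
print(shapes[0]["points"])  # → [360.0, 50.0, 1263.0, 720.0]
```

The advantage of this route is that it sidesteps the broken bulk-annotation path entirely: inference runs wherever you like, and CVAT only ever sees a plain annotation upload.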
I am also seeing that automatic annotations do not get displayed on the images if you run annotation for a whole task, but it seems to work if you run it on images individually. I added a print statement to the Python code of the annotation function to print out the result.
The result looks fine, so it must be an issue with receiving or saving the multiple requests. Even exporting the annotations shows nothing, so somehow the result from the detector is lost.
@Inquisitive-ME How do you run the automatic annotation for a single image? Maybe my version of CVAT is not the latest?
I can confirm automatic annotation is broken for tasks; it only works for single images. I have tried two models, Faster R-CNN and Mask R-CNN. Both work fine on a single image, but when I use them on a task containing multiple images, the run shows as completed yet no annotation results appear on the images.
Same problem documented here; possible duplicate: #2644 (F-RCNN on CPU).
I have deployed the OpenVINO versions of F-RCNN and Mask R-CNN to my cluster. No luck.
Regardless, neither TF F-RCNN nor OpenVINO F-RCNN works for bulk annotation on a task. Somehow OpenVINO F-RCNN works on cvat.org for person detection. Perhaps they have a different function.yaml?
@turowicz Your information was really helpful. I investigated why cvat.org automatic annotation doesn't work for me, and I realized that if a task is assigned to a project this feature fails; otherwise it works fine. Can you also try it on the develop branch, create a task without assigning it to a project, and see if bulk annotation works?
@jahaniam Woop woop! You're right! After removing all the projects on my k8s, automatic annotation works on tasks!
@jahaniam Also, cancelling automated tasks is broken.
Yes, this is how it works, on
@turowicz, we will look at the problem after the public holidays in Russia. Sorry for the experience. "I realized if a task is assigned to a project this feature fails, otherwise works fine" - it looks like a regression. @ActiveChooN, could you please take a look?
Hi, I was also using TF Faster R-CNN for automatic annotation. I was able to deploy the function and run auto-annotation using nuclio 1.5.8, after clearing all the other errored functions in the nuclio dashboard. I also had to change the number of workers in the YAML file from 2 to 1. Auto-annotation ran well on the first three videos (traffic footage, each under 3 minutes). The later three also ran to the end with a success message but a few error messages. Attaching the screenshot here: https://drive.google.com/file/d/1B8yvmU4KR6K_-QMKv_B0wmI1vjYT6K0M/view?usp=sharing
Also, the later videos were longer than 5 minutes, so I will check again whether the shorter videos throw an error or not. In my opinion, this issue might be related to runtime parameters in the YAML file.
@ActiveChooN It would also be nice to support changing the project ID of a task through the API or UI.
I think the best way to work around it for now is to run inference outside CVAT, create a task, and upload the annotations using the API. See https://github.com/openvinotoolkit/cvat/tree/develop/utils/cli or localhost:8080/api/swagger
OK, so the issues identified so far:
Anything else? How about the fact that OpenVINO F-RCNN requires TF F-RCNN to co-exist?
Nuclio functions are independent of each other, so "OpenVINO F-RCNN requires TF F-RCNN to co-exist" is wrong; they are two independent functions. The only reason I suggested going with TF F-RCNN is that I debugged and tested it myself.
Thanks @jahaniam, I appreciate all your work. I can confirm that it works for tasks not tied to a project. Do you have a ballpark ETA for fixing this for tasks tied to a project?
I guess the problem with auto-annotation of a task in a project should be fixed by #2725.
@jahaniam, do you mean moving a task between projects? That is quite a complicated task and will be implemented in future releases as the project feature develops.
@ActiveChooN There is a project id field for tasks. Wouldn't changing it change the task's project?
@jahaniam, it would, but there are annotations in the task that depend on the project's labels, so we need to somehow merge the annotations with the new labels before moving the task.
Is this issue of semi-automatic annotation by model fixed? And where can I find instructions on how to configure semi-automatic annotation? Thanks.
I deployed my custom model for automatic annotation and it seems to run perfectly.
It shows the inference progress bar and the Docker logs look normal.
However, the annotations do not show up on my image dataset in CVAT.
What can I do?
More info
The following is what I send to CVAT, i.e., the context.Response part:
<class 'list'>---[{'confidence': '0.4071217', 'label': '0.0', 'points': [360.0, 50.0, 1263.0, 720.0], 'type': 'rectangle'}]
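As noted earlier in the thread, the suspicious part of this payload is the label field: in CVAT's bundled serverless detector functions, "label" carries the label name defined on the task (e.g. "car"), not a stringified class index like "0.0". The sketch below is a small sanity check for a handler's result list; the schema it enforces is an assumption based on those bundled functions, and the task labels are hypothetical:

```python
def check_results(results, task_labels):
    """Flag detections whose fields don't match what CVAT expects
    (assumed schema: label name string, 'rectangle' type, 4 points)."""
    problems = []
    for i, det in enumerate(results):
        if det.get("label") not in task_labels:
            problems.append(f"item {i}: label {det.get('label')!r} "
                            f"is not among the task labels")
        if det.get("type") != "rectangle":
            problems.append(f"item {i}: unexpected type {det.get('type')!r}")
        if len(det.get("points", [])) != 4:
            problems.append(f"item {i}: expected 4 points")
    return problems

# The payload from the issue above, checked against hypothetical task labels:
payload = [{"confidence": "0.4071217", "label": "0.0",
            "points": [360.0, 50.0, 1263.0, 720.0], "type": "rectangle"}]
issues = check_results(payload, task_labels={"person", "car"})
print(issues)  # the '0.0' label is flagged; points and type pass
```

If the label name returned by the function does not exactly match a label configured on the task, CVAT silently drops the detection, which would explain a "finished" run with no visible annotations.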