
How to run it for our own set of images/video? #12

Open
manansaxena opened this issue Jun 21, 2019 · 15 comments

Comments

@manansaxena

Hi,
I love the work you have done.
But I want to run it on my own set of images or videos.
How should I do that?

@liruilong940607
Owner

Thanks for your interest.

If you don't want to dig into the code, the fastest way is to generate a JSON file in the same format as the COCO annotation JSON, and use test.py to test it.

Note that our method takes both the image and keypoints as input, so human keypoints must be included in the JSON file. You may need to first detect the person keypoints with another method, such as the github repos [openpose], [pose-ae-train], [alphapose] etc.
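A minimal sketch of such a file (field names follow the standard COCO keypoints format; exactly which fields test.py actually reads should be checked against the repo's dataset loader):

```python
import json

# Minimal COCO-style annotation file for inference (a sketch; verify the
# required fields against the repo's dataset loading code).
data = {
    "images": [
        {"id": 1, "file_name": "my_image.jpg", "width": 1280, "height": 720}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,  # 1 = "person" in COCO
            # 17 keypoints as [x1, y1, v1, x2, y2, v2, ...], v in {0, 1, 2}
            "keypoints": [640, 200, 2, 650, 190, 2] + [0, 0, 0] * 15,
            "num_keypoints": 2,
            # reportedly unused at inference time, but the keys should exist
            "bbox": [600, 150, 120, 400],
            "area": 48000,
            "iscrowd": 0,
            "segmentation": [],
        }
    ],
    "categories": [
        {"id": 1, "name": "person", "keypoints": [], "skeleton": []}
    ],
}

with open("my_test.json", "w") as f:
    json.dump(data, f)
```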

@manansaxena
Author

Thanks for the reply.
One more thing: I have three people in the video, and I want to create a binary mask, i.e., at any one time only one person is visible in the video. Where should I change the code to make this possible?
Thanks

@liruilong940607
Owner

I think you need a post-processing step after you get the segmentation masks.

You can implement this easily with a few OpenCV functions.
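For instance, a minimal sketch (assuming the model returns one binary mask per detected person per frame; `isolate_person` is an illustrative helper, not part of this repo):

```python
import cv2
import numpy as np

def isolate_person(frame, masks, person_idx):
    """Black out everything except the selected person's mask.

    frame: HxWx3 BGR image; masks: list of HxW binary masks, one per
    detected person in this frame (as produced by the segmentation model).
    """
    mask = masks[person_idx].astype(np.uint8)
    # Optional clean-up: close small holes in the predicted mask.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Keep only the selected person; everything else goes black.
    return cv2.bitwise_and(frame, frame, mask=mask)
```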

@wine3603

> I have three people in the video, and I want to create a binary mask, i.e., at any one time only one person is visible in the video. Where should I change the code to make this possible?

Do you want to track the human objects in the video?
If so, I think you need to add some tracking logic, since the segmentation of each person may differ from frame to frame.
I am interested in this topic, too.
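One simple way to keep a person's identity consistent across frames is greedy IoU matching between consecutive frames' masks; a sketch (the threshold is illustrative):

```python
import numpy as np

def mask_iou(a, b):
    """IoU between two boolean masks of the same shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def match_to_previous(prev_masks, cur_masks, iou_thresh=0.3):
    """Greedily give each current mask the index of the best-overlapping
    previous-frame mask; -1 means no match (e.g. a new person)."""
    assigned, used = [], set()
    for cur in cur_masks:
        best_idx, best_iou = -1, iou_thresh
        for i, prev in enumerate(prev_masks):
            if i in used:
                continue
            iou = mask_iou(prev, cur)
            if iou > best_iou:
                best_idx, best_iou = i, iou
        if best_idx >= 0:
            used.add(best_idx)
        assigned.append(best_idx)
    return assigned
```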

@marteiro

marteiro commented Jul 4, 2019

This is amazing work.
I'm trying to run the test, inspect the results, and later apply this to a live video feed. Right now I was able to run test.py with the paper's final weights, but I couldn't see any result files.

[Screenshot from 2019-07-04 09-10-57: test.py console output]

Is this right? How can I see the results?

Thanks,

@manansaxena
Author

manansaxena commented Jul 8, 2019

> Do you want to track the human objects in the video? If so, I think you need to add some tracking logic, since the segmentation of each person may differ from frame to frame. I am interested in this topic, too.

Hey,
were you able to solve the problem of converting OpenPose keypoints to COCO format?

@wine3603

> Were you able to solve the problem of converting OpenPose keypoints to COCO format?

I used openpose to estimate the joints and output them in COCO JSON format, then built a new test dataset in the OCHuman format to run the test, but there are too many bugs.
Has anyone succeeded in running it on their own images?
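For reference, a sketch of the index remapping, assuming output from OpenPose's 18-keypoint COCO model (the BODY_25 model uses different indices):

```python
# OpenPose COCO-18 order: 0 Nose, 1 Neck, 2 RShoulder, 3 RElbow, 4 RWrist,
# 5 LShoulder, 6 LElbow, 7 LWrist, 8 RHip, 9 RKnee, 10 RAnkle, 11 LHip,
# 12 LKnee, 13 LAnkle, 14 REye, 15 LEye, 16 REar, 17 LEar.
# COCO-17 drops the Neck and orders joints nose, eyes, ears, shoulders,
# elbows, wrists, hips, knees, ankles (left before right).
OPENPOSE18_TO_COCO17 = [0, 15, 14, 17, 16, 5, 2, 6, 3, 7, 4, 11, 8, 12, 9, 13, 10]

def openpose_to_coco(op_kpts):
    """op_kpts: flat list of 18 * (x, y, confidence) from OpenPose.
    Returns a flat COCO keypoints list of 17 * (x, y, v)."""
    coco = []
    for idx in OPENPOSE18_TO_COCO17:
        x, y, c = op_kpts[3 * idx : 3 * idx + 3]
        v = 2 if c > 0 else 0  # crude visibility flag from confidence
        coco.extend([x, y, v])
    return coco
```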

@manansaxena
Author

No, I have done the same thing, but new bugs just keep coming up.

@manansaxena
Author

manansaxena commented Jul 12, 2019

@liruilong940607
Hi,
I formed a JSON file in COCO format and arranged the data just as in the original documentation.
But when I run test.py it fails with:

x1, y1, bboxw, bboxh = obj['bbox']
ValueError: not enough values to unpack (expected 4, got 0)

In another issue you said to leave these fields blank because they are not used for inference: 'area', 'iscrowd', 'segmentation', 'bbox'.
1) What did you mean by blank?
2) I only need inference (just the segmented output image and masks), so what changes are needed in test.py for this?

Thanks

@ManiaaJia

@liruilong940607 @manansaxena @wine3603 @marteiro
Hello everyone, sorry to bother you, but I can't access the dataset address provided by the dataset repo. Could you download it for me or give me a download link? Thanks a lot.

@azuic

azuic commented Oct 30, 2019

@manansaxena Hi! Do you mind sharing your final input data format? I'm having problems with category_id: I get a KeyError: 1 both when I set category_id to 1 in the input annotation file and when I remove the key from the file. Should I set the value to None instead? Thanks!

@azuic

azuic commented Oct 30, 2019

I fixed my issue above, but now I am getting the same 'bbox' error as you did. Do you mind sharing how you solved this problem?

@manansaxena
Author

I made a lot of changes in CocoDatasetInfo.py and removed all the bbox code from it. But I then got very poor results on in-the-wild images, so I can't say for sure that it isn't the problem, even though according to the paper the bbox shouldn't be required for inference.
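An alternative to stripping the bbox code is to fill in a bbox derived from the keypoints, so the loader's obj['bbox'] unpack succeeds (a sketch, untested against this repo):

```python
def bbox_from_keypoints(kpts, pad=10):
    """Derive a COCO-style [x, y, w, h] box from a flat keypoint list
    [x1, y1, v1, ...], ignoring unlabeled points (v == 0)."""
    xs = [kpts[i] for i in range(0, len(kpts), 3) if kpts[i + 2] > 0]
    ys = [kpts[i + 1] for i in range(0, len(kpts), 3) if kpts[i + 2] > 0]
    if not xs:  # no labeled keypoints; fall back to an empty box
        return [0, 0, 0, 0]
    x0, y0 = max(0, min(xs) - pad), max(0, min(ys) - pad)
    x1, y1 = max(xs) + pad, max(ys) + pad
    return [x0, y0, x1 - x0, y1 - y0]
```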

@pixelite1201

Did anyone manage to run it on their own set of images?

@ziBLan

ziBLan commented Apr 27, 2020

> Did anyone manage to run it on their own set of images?

I am trying ....
