It seems that the provided code is specific to the MPII dataset.
It would be great if you could share some insight into reproducing the numbers for the other tasks mentioned in the paper.
Sure. To reproduce our results, extract every 10th frame from the videos, compute a VGG19 representation for each such frame (using the penultimate layer), and then calculate the RNN-FV for every sequence of N consecutive frames (N is a hyperparameter; we tried 4, 8, and 16). Once you have those representations, split them into train/validation/test following each benchmark's protocol and use the code in this repository. A rough sketch of the preprocessing is shown below.
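Here is a minimal sketch of that preprocessing step (not the authors' original script): sampling every 10th frame, taking VGG19 penultimate-layer (4096-d) features, and grouping them into length-N windows that would then be fed to the RNN-FV code. The use of OpenCV and torchvision, the file paths, and the helper names are all assumptions for illustration.

```python
# Sketch of the described pipeline, assuming OpenCV + torchvision (not the
# authors' original code): sample every 10th frame, extract VGG19
# penultimate-layer features, group into N-frame windows for RNN-FV.
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T

N = 8  # window length; the paper reports trying 4, 8, and 16

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()
# Penultimate layer: drop the final classification layer, keeping the 4096-d output.
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def video_features(path, stride=10):
    """Return a (num_sampled_frames, 4096) tensor of VGG19 penultimate features."""
    cap = cv2.VideoCapture(path)
    feats, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:  # keep every `stride`-th frame
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            with torch.no_grad():
                feats.append(vgg(preprocess(rgb).unsqueeze(0)).squeeze(0))
        idx += 1
    cap.release()
    return torch.stack(feats)

def windows(feats, n=N):
    """Yield every sequence of n consecutive frame features."""
    for i in range(len(feats) - n + 1):
        yield feats[i:i + n]
```

Each yielded window would then be the input sequence for computing one RNN-FV descriptor, after which the train/validation/test split follows the benchmark's own protocol.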