I am running custom images through this model. All of the images have been pre-processed and cropped so they contain only the human, I've removed all annotation files and hardcoded the information those files used to provide, and I'm only processing images with a single person in frame.
For some reason, half of the output predictions are simply wrong: they are a mess. The other half look perfect. The wrong outputs are almost all identical, with only slight, barely noticeable differences in joint positions. Also, if I feed in, say, a folder of 1000 images, the predictions on images 1-64 are perfect, 65-128 are wrong, 129-192 are perfect, 193-256 are wrong, and this pattern continues. The pattern stays the same regardless of the input data.
Any idea why this is happening? I'm happy to provide more info about the issue. Thanks.
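In case it helps with debugging, here is a rough sketch of how I'm confirming which outputs are "wrong" (names like `preds` and the 2-pixel tolerance are just placeholders for my setup, not anything from the repo). Since the bad skeletons are nearly identical to each other, I flag any prediction that barely moves relative to the previous image and then print the contiguous runs of flagged indices, which is how I noticed the 64-image blocks:

```python
import numpy as np

# `preds` is an (N, num_joints, 2) array of predicted keypoint coordinates,
# one row per input image, in the order the images were fed in.
def find_collapsed_blocks(preds, tol=2.0):
    """Flag images whose skeleton is nearly identical to the previous one."""
    flags = np.zeros(len(preds), dtype=bool)
    for i in range(1, len(preds)):
        # Mean per-joint displacement between consecutive predictions (pixels).
        if np.mean(np.linalg.norm(preds[i] - preds[i - 1], axis=-1)) < tol:
            flags[i] = True
    return flags

def print_runs(flags):
    """Print contiguous runs of flagged images to expose any fixed-size blocks."""
    start = None
    for i, bad in enumerate(flags):
        if bad and start is None:
            start = i
        elif not bad and start is not None:
            print(f"suspect block: images {start}-{i - 1}")
            start = None
    if start is not None:
        print(f"suspect block: images {start}-{len(flags) - 1}")
```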
I actually just got it to work by lowering the batch size. I can no longer run the default batch size of 128 without getting CUDA memory-allocation errors; I'm not sure how or why it worked before, or why a batch size of 128 would make the output wrong.
After lowering the batch size from 128 to 50, the program worked for some data. For other data, however, half of the output predictions are still wrong: it produces 25 correct skeletons, then 25 wrong skeletons, and repeats that pattern. This seems to occur at every batch size I've tried.
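Since the bad block always seems to be the second half of each batch (64 of 128, 25 of 50), here is a quick check I'm using to test that hypothesis. It builds on the `flags` array from the sketch above; again, the function name and the 95% threshold are just my own placeholders, not part of this project:

```python
import numpy as np

def bad_blocks_match_half_batches(flags, batch_size):
    """Check whether the flagged (wrong) images are exactly the second half
    of every inference batch, e.g. 25 good / 25 bad when batch_size=50."""
    half = batch_size // 2
    expected = np.zeros(len(flags), dtype=bool)
    for start in range(0, len(flags), batch_size):
        # Mark the second half of each batch as the expected "bad" region.
        expected[start + half:start + batch_size] = True
    agreement = np.mean(flags == expected)
    print(f"agreement with half-batch hypothesis: {agreement:.1%}")
    return agreement > 0.95
```

So far this matches for every batch size I've tried, which makes me suspect something in how the second half of each batch is filled or post-processed, but I haven't pinned it down yet.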