It seems that with @shaing10's changes, the vis_mask returned in the train() function is already batched.
With batch_size=5, the DataLoader returns a vis_mask of shape [5, 196, 200].
Next, Line 61
vis_mask = vis_mask.repeat((batch_size,)+(1,)*len(vis_mask.shape))
prepends another dimension of size batch_size, and we end up with a [5, 5, 196, 200] tensor whose first dimension is just repeats.
Has anyone actually run this change with batch_size > 1?
I think the correct fix is to replace Line 61 with:
vis_mask = vis_mask.unsqueeze(1)
which turns vis_mask into a [5, 1, 196, 200] tensor.
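A quick shape check illustrates both behaviors (a minimal sketch: the shapes come from this report, and the zero-filled tensor is just a stand-in for the real mask):

```python
import torch

batch_size = 5
# vis_mask as returned by the DataLoader: already batched, shape [5, 196, 200]
vis_mask = torch.zeros(batch_size, 196, 200)

# Current Line 61: prepends another batch dimension made of repeats
repeated = vis_mask.repeat((batch_size,) + (1,) * len(vis_mask.shape))
print(repeated.shape)  # torch.Size([5, 5, 196, 200])

# Proposed fix: insert a singleton dimension instead
fixed = vis_mask.unsqueeze(1)
print(fixed.shape)  # torch.Size([5, 1, 196, 200])
```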