error:
batch_size, l, d = x.size()
ValueError: too many values to unpack (expected 3)
Here, x is the input image tensor.
The error came up when I tried to train the model. It seems the LMDB-format input image has one extra dimension after the encoder: its size is [12, 3, 64, 256].
However, when I run demo.py, the size after the encoder is [1, 25, 512], which is acceptable.
I do not know why there is a difference between training and prediction.
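
For what it's worth, [12, 3, 64, 256] looks like a raw image batch (12 images, 3 channels, 64x256) rather than an encoded feature sequence like [1, 25, 512], so the encoder may not actually be applied on the training path before this line runs. Below is a minimal sketch of the unpacking failure, assuming PyTorch; the shapes are the ones reported above, not code taken from the repo:

```python
import torch

# Shape seen in demo.py: an encoded sequence [batch, length, dim].
x_demo = torch.randn(1, 25, 512)
# Shape seen in training: looks like a raw image batch [B, C, H, W].
x_train = torch.randn(12, 3, 64, 256)

batch_size, l, d = x_demo.size()  # works: exactly three dimensions to unpack

try:
    batch_size, l, d = x_train.size()  # four dimensions, cannot unpack into three names
except ValueError as e:
    print(e)  # "too many values to unpack (expected 3)"
```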