
Unable to run resnet_34_8s_demo.ipynb #11

Open
sharanry opened this issue Apr 1, 2018 · 8 comments

Comments


sharanry commented Apr 1, 2018

TypeError: torch.FloatTensor constructor received an invalid combination of arguments - got (int, int, numpy.int64, numpy.int64), but expected one of:
 * no arguments
 * (int ...)
      didn't match because some of the arguments have invalid types: (int, int, numpy.int64, numpy.int64)
 * (torch.FloatTensor viewed_tensor)
 * (torch.Size size)
 * (torch.FloatStorage data)
 * (Sequence data)

Which version of PyTorch should be used?
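
For what it's worth, the argument types in the message suggest that the sizes reaching the tensor constructor are numpy integers rather than plain Python ints. A minimal sketch of what I suspect is the root cause (a guess, not confirmed):

import numpy as np
import torch

# Plain Python ints are accepted as tensor sizes:
t = torch.FloatTensor(64, 64, 3, 3)

# numpy integers, on the PyTorch version from this error, reproduce the same message:
try:
    torch.FloatTensor(64, 64, np.int64(3), np.int64(3))
except TypeError as e:
    print(e)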

@warmspringwinds (Owner)

@sharanry Hi,

It should work with the latest available version.

Could you please give me the line where the exception occurred?

@superoil1983


TypeError Traceback (most recent call last)
in ()
----> 1 fcn = resnet_dilated.Resnet34_8s(num_classes=21)
2 fcn.load_state_dict(torch.load('resnet_34_8s_66.pth'))
3 fcn.cuda()
4 fcn.eval()

related lines:

/pytorch-segmentation-detection/vision/torchvision/models/resnet.py in __init__(self, inplanes, planes, stride, downsample, dilation)
---> 45 self.conv1 = conv3x3(inplanes, planes, stride, dilation=dilation)
46 self.bn1 = nn.BatchNorm2d(planes)

/pytorch-segmentation-detection/vision/torchvision/models/resnet.py in conv3x3(in_planes, out_planes, stride, dilation)
36 return nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride,
---> 37 padding=full_padding, dilation=dilation, bias=False)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/conv.py in __init__(self, in_channels, out_channels, kernel_size, stride, padding, dilation, transposed, output_padding, groups, bias)
32 self.weight = Parameter(torch.Tensor(
---> 33 out_channels, in_channels // groups, *kernel_size))
34 if bias:

TypeError: torch.FloatTensor constructor received an invalid combination of arguments - got (int, int, numpy.int64, numpy.int64), but expected one of:
...

Probably the same error.


coolcoder001 commented Apr 3, 2018

Probably the same error:
fcn = resnet_dilated.Resnet18_8s(num_classes=2)
is throwing an error like this:

TypeError                                 Traceback (most recent call last)
<ipython-input-3-61af6330ec33> in <module>()
----> 1 fcn = resnet_dilated.Resnet18_8s(num_classes=2)

/home/xxxx/pytorch-segmentation-detection/pytorch_segmentation_detection/models/resnet_dilated.py in __init__(self, num_classes)
     54                                       pretrained=True,
     55                                       output_stride=8,
---> 56                                       remove_avg_pool_layer=True)
     57 
     58         # Randomly initialize the 1x1 Conv scoring layer

/home/xxxx/pytorch-segmentation-detection/vision/torchvision/models/resnet.py in resnet18(pretrained, **kwargs)
    224         pretrained (bool): If True, returns a model pre-trained on ImageNet
    225     """
--> 226     model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
    227     if pretrained:
    228         model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))

/home/xxxx/pytorch-segmentation-detection/vision/torchvision/models/resnet.py in __init__(self, block, layers, num_classes, fully_conv, remove_avg_pool_layer, output_stride)
    141         self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
    142 
--> 143         self.layer1 = self._make_layer(block, 64, layers[0])
    144         self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
    145         self.layer3 = self._make_layer(block, 256, layers[2], stride=2)

/home/xxxx/pytorch-segmentation-detection/vision/torchvision/models/resnet.py in _make_layer(self, block, planes, blocks, stride, dilation)
    189 
    190         layers = []
--> 191         layers.append(block(self.inplanes, planes, stride, downsample, dilation=self.current_dilation))
    192         self.inplanes = planes * block.expansion
    193         for i in range(1, blocks):

/home/xxxx/pytorch-segmentation-detection/vision/torchvision/models/resnet.py in __init__(self, inplanes, planes, stride, downsample, dilation)
     43     def __init__(self, inplanes, planes, stride=1, downsample=None, dilation=1):
     44         super(BasicBlock, self).__init__()
---> 45         self.conv1 = conv3x3(inplanes, planes, stride, dilation=dilation)
     46         self.bn1 = nn.BatchNorm2d(planes)
     47         self.relu = nn.ReLU(inplace=True)

/home/xxxx/pytorch-segmentation-detection/vision/torchvision/models/resnet.py in conv3x3(in_planes, out_planes, stride, dilation)
     35 
     36     return nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride,
---> 37                      padding=full_padding, dilation=dilation, bias=False)
     38 
     39 

~/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/conv.py in __init__(self, in_channels, out_channels, kernel_size, stride, padding, dilation, groups, bias)
    276         super(Conv2d, self).__init__(
    277             in_channels, out_channels, kernel_size, stride, padding, dilation,
--> 278             False, _pair(0), groups, bias)
    279 
    280     def forward(self, input):

~/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/conv.py in __init__(self, in_channels, out_channels, kernel_size, stride, padding, dilation, transposed, output_padding, groups, bias)
     31         else:
     32             self.weight = Parameter(torch.Tensor(
---> 33                 out_channels, in_channels // groups, *kernel_size))
     34         if bias:
     35             self.bias = Parameter(torch.Tensor(out_channels))

TypeError: torch.FloatTensor constructor received an invalid combination of arguments - got (int, int, numpy.int64, numpy.int64), but expected one of:
 * no arguments
 * (int ...)
      didn't match because some of the arguments have invalid types: (int, int, numpy.int64, numpy.int64)
 * (torch.FloatTensor viewed_tensor)
 * (torch.Size size)
 * (torch.FloatStorage data)
 * (Sequence data)
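
A workaround that may be enough on Python 3 is to cast those values back to plain Python ints inside the fork's conv3x3, immediately before the nn.Conv2d call shown in the traceback. This is only a sketch: kernel_size and full_padding are the names taken from the traceback above, and the rest of the helper is assumed.

# vision/torchvision/models/resnet.py, inside conv3x3() -- sketch of a local fix,
# not the exact upstream code. The traceback suggests kernel_size and full_padding
# come out of numpy arithmetic as pairs of numpy.int64, which the old FloatTensor
# constructor rejects, so cast them to Python ints first:
kernel_size = tuple(int(k) for k in kernel_size)
full_padding = tuple(int(p) for p in full_padding)

return nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride,
                 padding=full_padding, dilation=dilation, bias=False)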

@warmspringwinds (Owner)

@coolcoder001 @superoil1983 @sharanry -- that's a bug that appeared after the new version of PyTorch came out. I will resolve it soon. Thank you for reporting, guys :)

@warmspringwinds (Owner)

@coolcoder001 @superoil1983 @sharanry

I have just checked the resnet_34_8s_benchmark file with PyTorch 0.3.1 and Python 2
installed in Anaconda 2, and it seems to work fine.

Could you please provide the PyTorch version and Python version that you are using when experiencing this problem?

Thank you.
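
(If it helps when reporting back, this prints both versions:)

import sys
import torch

print(sys.version)        # Python version
print(torch.__version__)  # PyTorch version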

@warmspringwinds (Owner)

I have also checked it on the latest unstable version of PyTorch with Python 2 -- it works.

My hypothesis is that this problem is related to Python 3, which we don't support yet.
It should work on Python 2 -- let me know if you have any further problems.


vishalanand commented Jul 4, 2018

@warmspringwinds Could you please add a requirements.txt file as well?
It would clear up a lot of confusion.

pip freeze > requirements.txt

P.S. In my case, the errors are about unknown arguments for ResNet initialization. I tried it on multiple cloud / GPU setups with the same output. Maybe there are unchecked-in files (there could be cached files in your setups; on a clean repo pull, these errors might occur for you as well).
[python2.7, pytorch-0.3.1]:

TypeError                                 Traceback (most recent call last)
<ipython-input-1-a4cbdc8e5706> in <module>()
     36 
     37 print(torch.__version__)
---> 38 fcn = resnet_dilated.Resnet34_8s(num_classes=21)
     39 fcn.load_state_dict(torch.load('resnet_34_8s_68.pth'))
     40 #fcn.load_state_dict(torch.load('resnet34-333f7ec4.pth'))

/models/pytorch-segmentation-detection/pytorch_segmentation_detection/models/resnet_dilated.py in __init__(self, num_classes)
    290         # Load the pretrained weights, remove avg pool
    291         # layer and get the output stride of 8
--> 292         resnet34_8s = models.resnet34(fully_conv=True, pretrained=True, output_stride=8, remove_avg_pool_layer=True)
    293         #resnet34_8s = models.resnet34(pretrained=True)
    294 

/usr/local/lib/python2.7/dist-packages/torchvision/models/resnet.pyc in resnet34(pretrained, **kwargs)
    172         pretrained (bool): If True, returns a model pre-trained on ImageNet
    173     """
--> 174     model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
    175     if pretrained:
    176         model.load_state_dict(model_zoo.load_url(model_urls['resnet34']))

TypeError: __init__() got an unexpected keyword argument 'fully_conv'
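
The path in this traceback points at /usr/local/lib/python2.7/dist-packages/torchvision, i.e. the stock pip torchvision (which does not know fully_conv) rather than the repo's bundled vision fork. A quick sanity check for which one is actually being imported (the path below is just an example; adjust it to wherever the repo is cloned):

import sys

# Put the repo's bundled torchvision fork ahead of the pip-installed package:
sys.path.insert(0, '/models/pytorch-segmentation-detection/vision')

import torchvision.models as models
# Should print a path inside .../pytorch-segmentation-detection/vision/,
# not .../dist-packages/torchvision/:
print(models.__file__)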

Also, if you could put in the resnet_34_8s_66.pth weights file, it would be helpful for running resnet_34_8s_demo.ipynb without changes.

Many thanks!


vakkov commented Jun 10, 2019

The way I got it to work in Python 3 was to change:

--- a/pytorch_segmentation_detection/models/resnet_dilated.py
+++ b/pytorch_segmentation_detection/models/resnet_dilated.py
@@ -1,6 +1,9 @@
 import numpy as np
 import torch.nn as nn
-import torchvision.models as models
+import sys
+#sys.path.insert(0, '/home/vakko/Downloads/disso/pytorch-segmentation-detection/vision/')
+import vision.torchvision.models as models
+print(models.__file__)
and to import torchvision in the same way in the notebook; in my case, multiclass_resnet_18_8s_train:

  • "import torchvision\n",
  • "import vision.torchvision as torchvision\n"
