There is something wrong when I tried to adapt the vgg_cell dataset, and various other questions. #7
Hi, can you try resizing the patch to 64x64? The adapted model actually uses a different patch size, sorry for the confusion. To answer your question about the dot annotations: you have to convert any bounding box annotations to dot annotation form -- basically, a binary image the same size as the RGB image, with a value of 1 at the center location of every bounding box and 0 everywhere else. The objects shouldn't be "bigger", because the method expects that the object to count is roughly the same size as the exemplar patch (you may need to rescale the images accordingly to satisfy this assumption).
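The box-to-dot conversion described above can be sketched in a few lines. This is a minimal illustration (the function name and box format are my own choices, not from this repo): it assumes boxes given as (x_min, y_min, x_max, y_max) in pixel coordinates.

```python
import numpy as np

def boxes_to_dot_annotation(boxes, image_shape):
    """Convert bounding boxes to a binary dot-annotation map.

    boxes: iterable of (x_min, y_min, x_max, y_max) in pixel coordinates
    image_shape: (height, width) of the corresponding RGB image
    Returns a (height, width) uint8 array with 1 at each box center,
    0 everywhere else.
    """
    dots = np.zeros(image_shape, dtype=np.uint8)
    for x_min, y_min, x_max, y_max in boxes:
        cx = int(round((x_min + x_max) / 2))
        cy = int(round((y_min + y_max) / 2))
        # Clip to image bounds in case a box touches the border.
        cy = min(max(cy, 0), image_shape[0] - 1)
        cx = min(max(cx, 0), image_shape[1] - 1)
        dots[cy, cx] = 1
    return dots
```

The sum of the resulting map equals the object count, which is what dot-annotated counting datasets like VGG Cells provide directly.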
Hi, thank you for replying again. I did resize the patch to 64x64, and I also changed the input size in "dataloader.py" for the VGG set to 256x256, but the problem didn't go away. I will try enlarging all the VGG set images to 800x800 and training again to see if it goes away. If it doesn't, I will ask again; if it does, I will close the issue.
The validation loss during adaptation looks right, but after loading the model.h5 from adaptation, the weight-loading problem in the demo still doesn't go away.
If you use the default config settings (in dataloader.py) to adapt to the VGG cell dataset, you will have to modify the demo script to make sure the inputs match the config settings you trained with. Can you confirm that these are the same? You can share the trained weights and test images with me so I can take a look. |
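A cheap way to catch this class of mismatch early is to validate the demo's settings against the ones used for adaptation before building the model. This is only a sketch; the config names below are hypothetical, not the actual variables in dataloader.py or the demo script:

```python
# Hypothetical values -- in practice these come from the config settings
# in dataloader.py that were used during adaptation.
TRAIN_CONFIG = {"patch_size": 64, "input_size": 256}

def check_demo_matches_training(demo_patch_size, demo_input_size, train_config):
    """Fail with a clear message instead of a cryptic weight-shape
    error when loading model.h5 into a differently-configured demo."""
    if demo_patch_size != train_config["patch_size"]:
        raise ValueError(
            f"patch size mismatch: demo={demo_patch_size}, "
            f"trained={train_config['patch_size']}")
    if demo_input_size != train_config["input_size"]:
        raise ValueError(
            f"input size mismatch: demo={demo_input_size}, "
            f"trained={train_config['input_size']}")
```

The point is simply that patch and input sizes baked into the trained weights must match whatever the demo script constructs.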
Yes I changed the patch size. Thank you! |
Hi, it's me again. Thank you very much for updating the environment.
I have a few questions:
-I followed the instructions and was able to demo the HeLa image you left as an example.
-I got the vgg_cell dataset from the link and tried to adapt it. After adaptation I loaded the resulting weights model.h5 into the demo to see how it works. The image is one of the images from the vgg_cell set, and the patch was cropped from the image and adjusted to 63x63. When I ran the demo it wasn't able to load the weights; it returned "Cannot assign to variable patchnet_conv1/kernel:0 due to variable shape (7, 7, 3, 64) and value shape (64,) are incompatible". I didn't change anything except some data paths, and I used the ResNet no_top weights mentioned in the code.
-I am a student and a new learner in image processing. I see that you note adapting using "dots" images. The vgg_cell dataset comes with dot images, and the SIMCEP tool was mentioned, which makes sense because cells map naturally to dots. But what about the CARPK dataset and other larger entities such as humans or bigger objects: how do you get the dot images of those sets for adaptation? Or did you use bounding box images?
Please help me with this.
Thank you very much.