Something goes wrong when I try to adapt to the vgg_cell dataset, plus various other questions #7

Open
Nhatcao99 opened this issue Jan 25, 2021 · 5 comments

@Nhatcao99

Hi, it's me again. Thank you very much for updating the environment.
I have a few questions:
- I followed the instructions and was able to run the demo on the HeLa image you provided as an example.
- I then got the vgg_cell dataset from the link and tried to adapt the model to it. After adaptation, I loaded the resulting model.h5 weights into the demo to see how it works. The test image is one of the images from the vgg_cell set, and the patch was cropped from that image and resized to 63x63. The demo wasn't able to load the weights; it returned "Cannot assign to variable patchnet_conv1/kernel:0 due to variable shape (7, 7, 3, 64) and value shape (64,) are incompatible". I didn't change anything except some data paths, and I used the ResNet no_top weights mentioned in the code. (A weight-inspection sketch follows this list.)
- I am a student and new to image processing. I see that you note adaptation uses "dot" images. The vgg_cell dataset comes with dot images, and the SIMCEP tool was mentioned, which makes sense since cells map naturally to dots. But what about the CARPK dataset and larger objects such as people -- how did you get the dot images for those sets for adaptation, or did you use bounding-box images?
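For debugging the shape mismatch above, a minimal sketch that lists every weight tensor stored in an .h5 file, so it can be compared against the layers the demo model builds (this assumes the h5py package, which is not part of this repo):

```python
import h5py

# Print the shape of every weight tensor stored in a Keras .h5 file.
with h5py.File('model.h5', 'r') as f:
    # Full-model saves nest the weights under 'model_weights';
    # weights-only saves keep the layer groups at the top level.
    root = f['model_weights'] if 'model_weights' in f else f
    for layer_name, layer_group in root.items():
        def show(name, obj, layer=layer_name):
            if isinstance(obj, h5py.Dataset):
                print(f'{layer}/{name}: {obj.shape}')
        layer_group.visititems(show)
```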
Please help me with this.
Thank you very much.

@erikalu
Owner

erikalu commented Feb 1, 2021

Hi, can you try resizing the patch to 64x64? The adapted model actually uses a different patch size, sorry for the confusion.
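For example, a minimal resize sketch using PIL (the file names are placeholders):

```python
from PIL import Image

# Resize the exemplar patch from 63x63 to the 64x64 the adapted model expects.
patch = Image.open('patch.png')
patch = patch.resize((64, 64), Image.BILINEAR)
patch.save('patch_64.png')
```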

To answer your question about the dot annotations: you have to convert any bounding-box annotations to dot-annotation form -- basically, a binary image the same size as the RGB image, with a value of 1 at the center location of every bounding box and 0 everywhere else. The objects shouldn't be 'bigger', because the method expects the objects being counted to be roughly the same size as the exemplar patch (you may need to rescale the images accordingly to satisfy this assumption).
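A minimal sketch of that box-to-dot conversion, assuming boxes given as (x1, y1, x2, y2) pixel coordinates (the function name is illustrative, not from this repo):

```python
import numpy as np

def boxes_to_dot_map(boxes, height, width):
    """Convert bounding boxes to a binary dot map: 1 at each box center,
    0 everywhere else, same spatial size as the RGB image."""
    dots = np.zeros((height, width), dtype=np.uint8)
    for x1, y1, x2, y2 in boxes:
        cx = int(round((x1 + x2) / 2.0))
        cy = int(round((y1 + y2) / 2.0))
        if 0 <= cy < height and 0 <= cx < width:
            dots[cy, cx] = 1
    return dots
```

The resulting map can be saved (scaled by 255) as a PNG next to each RGB image, mirroring the dot images that ship with the VGG cell dataset.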

@Nhatcao99
Author

Hi, thank you for replying again.
I really appreciate the answer about the dot images; it cleared up my understanding of the architecture.

I did resize the patch to 64x64, and I also changed the input size in "dataloader.py" to 256x256 for the vgg_cell set, but the problem doesn't go away. I will try enlarging all the vgg_cell images to 800x800 and training again to see if that fixes it. If it doesn't, I will ask again; if it does, I will close the issue.

@Nhatcao99
Author

The validation loss during adaptation looks right, but the problem loading the weights into the demo doesn't go away after loading the model.h5 produced by adaptation. It persists even when adaptation saves only the weights, and even though the original gmn.h5 file and my model.h5 file are the same size.

@erikalu
Owner

erikalu commented Feb 1, 2021

If you use the default config settings (in dataloader.py) to adapt to the VGG cell dataset, you will have to modify the demo script to make sure the inputs match the config settings you trained with. Can you confirm that these are the same?
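To illustrate the kind of consistency check meant here (the config values below are placeholders -- substitute whatever you actually set in dataloader.py for adaptation):

```python
import numpy as np

# Placeholder training-time settings; use your actual dataloader.py values.
TRAIN_IMAGE_SIZE = (256, 256)  # image height/width used during adaptation
TRAIN_PATCH_SIZE = (64, 64)    # exemplar patch height/width used during adaptation

def check_demo_inputs(image, patch):
    """Fail fast if the demo inputs don't match the training-time config."""
    assert image.shape[:2] == TRAIN_IMAGE_SIZE, (
        f'image is {image.shape[:2]}, expected {TRAIN_IMAGE_SIZE}')
    assert patch.shape[:2] == TRAIN_PATCH_SIZE, (
        f'patch is {patch.shape[:2]}, expected {TRAIN_PATCH_SIZE}')

# Example with dummy arrays standing in for real inputs:
check_demo_inputs(np.zeros((256, 256, 3)), np.zeros((64, 64, 3)))
```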

You can share the trained weights and test images with me so I can take a look.

@Nhatcao99
Author

Nhatcao99 commented Feb 1, 2021

Yes, I changed the patch size.
Here are the weights, the patch image I cropped (64x64), and the input image (800x800) for testing:
https://drive.google.com/file/d/10JXFkJW_CeIjMwCgyvE-YVKHsz9vUyDy/view?usp=sharing
[attached images: 200_ex (the cropped patch) and 200cell (the test image)]

Thank you!
