heavy unet tests - not working yet #327
Conversation
There appear to be some Python formatting errors in 1813190. This pull request uses the [psf/black](https://github.com/psf/black) formatter to fix these issues.
Related: #331
The Architecture superclass says that we should be returning `Coordinate`s for these properties. This is important because we regularly use these values in arithmetic where we expect `+`, `-`, `*`, and `/` to be applied element-wise.
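For illustration, a minimal sketch of the element-wise arithmetic being relied on here, assuming `funlib.geometry.Coordinate`; the shapes below are made up:

```python
from funlib.geometry import Coordinate

# Illustrative values only
voxel_size = Coordinate((4, 4, 4))
input_shape = Coordinate((2, 132, 132))

# Coordinate applies arithmetic element-wise, so shape * voxel size
# gives the input size in world units:
input_size = input_shape * voxel_size
print(input_size)  # -> (8, 528, 528)

# A plain tuple would not do this: (2, 132, 132) * 2 is tuple repetition,
# and tuple * tuple raises a TypeError.
```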
…re robust. If we upsample, we probably want to apply a convolution to fine-tune the outputs, rather than simply upsampling, which we could do outside of a network. If we assume a kernel of size (3, 3, 3), this fails for 2D networks that process data using kernels of size (1, 3, 3). We now just use the last kernel in `kernel_size_up`, which is a bit more robust.
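A rough sketch of what "use the last kernel" means here; the kernel lists and channel counts are made up, not the actual CNNectomeUNet code:

```python
import torch
from torch import nn

# Hypothetical kernel sizes for a pseudo-2D network that never convolves
# across z; the real values come from the UNet configuration.
kernel_size_up = [[(1, 3, 3), (1, 3, 3)]]

# Instead of hard-coding (3, 3, 3) for the convolution that follows the
# final upsample, reuse the last kernel of kernel_size_up:
finetune_kernel = kernel_size_up[-1][-1]  # (1, 3, 3)

finetune_conv = nn.Conv3d(12, 12, kernel_size=finetune_kernel)
x = torch.rand(1, 12, 1, 64, 64)  # single z-section, so (3, 3, 3) would fail
print(finetune_conv(x).shape)  # torch.Size([1, 12, 1, 62, 62])
```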
… during training. Otherwise the BatchNorm breaks.
This should probably just be switched to use `funlib.persistence.prepare_ds`.
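For reference, a sketch of what that switch could look like; this assumes the `prepare_ds(filename, ds_name, total_roi, voxel_size, dtype, ...)` form of the `funlib.persistence` API, and the paths and shapes are made up, so treat it as an assumption rather than the actual test code:

```python
from funlib.geometry import Coordinate, Roi
from funlib.persistence import prepare_ds

# Made-up geometry for a small test volume
voxel_size = Coordinate((4, 4, 4))
total_roi = Roi((0, 0, 0), (400, 400, 400))

# Create the zarr dataset with consistent metadata instead of writing it
# by hand in the fixture
array = prepare_ds(
    "test_data.zarr",
    "volumes/gt",
    total_roi,
    voxel_size,
    dtype="uint8",
)
```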
Fixed a couple of bugs:
1) The CNNectome UNet wasn't following the API defined by the Architecture `ABC`; it was supposed to return `Coordinate` class instances for `voxel_size`, `input_shape`, etc.
2) The CNNectome UNet had logical errors in the definition of `kernel_size_up` and `kernel_size_down`.
3) The new tests were expecting multi-channel data for the 2D UNet and single-channel data for the 3D UNet, despite always getting single-channel data.
4) Fixed the `voxel_size` attribute in the test fixtures.
There appear to be some Python formatting errors in dba261f. This pull request uses the [psf/black](https://github.com/psf/black) formatter to fix these issues.
Looks like we both fixed the dims error in different places.
Just want to make a note here: the tests that are still failing are of the following form:
I removed my change.
I will split the test, and I will create a dataset for the upsample UNet.
I agree with you, but we need to figure out a way to test whether the model is training, because I remember we had a problem with distance without adding a 4th dimension.
I think this was only after validation: we saw a strange jump in the loss. It was caused by putting the model in eval mode for validation and then not switching it back after completing prediction. I think this went unnoticed because it only makes a difference for layers like BatchNorm and dropout that behave differently between train and eval. I'm pretty sure that's fixed now, and I haven't seen this jump in the minimal tutorial anymore. We should be testing that our models train, but I think a better way to do this is on full examples, so we should have the minimal tutorial cover more cases (instance/semantic, 2D/3D, multi-class/single-class, single-channel raw/multi-channel raw) and test that the loss and validation metrics reach reasonable values.
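A minimal sketch of the eval/train bookkeeping described above; the model and data here are placeholders:

```python
import torch
from torch import nn

# Placeholder model with a BatchNorm layer, which is exactly the kind of
# layer that behaves differently in train vs. eval mode
model = nn.Sequential(nn.Conv3d(1, 8, 3), nn.BatchNorm3d(8), nn.ReLU())

def validate(model, batches):
    model.eval()  # use running stats, disable dropout
    with torch.no_grad():
        for raw in batches:
            _ = model(raw)
    # Crucial step: switch back, otherwise subsequent training iterations
    # silently keep using eval-mode BatchNorm behavior
    model.train()

validate(model, [torch.rand(1, 1, 16, 32, 32)])
```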
I'm thinking something like this: except instead of tracking time (which we could do), we would also track loss and validation scores for each of our tutorials. This repo looks pretty tailor-made for performance benchmarking, but we may be able to hack in the data we want to track, or find a similar/better way.
There appear to be some Python formatting errors in 97abfa0. This pull request uses the [psf/black](https://github.com/psf/black) formatter to fix these issues.
Yes, if you have 3D data, you should be using 3D convolutions; even if you want a 2D model, you just use (1, 3, 3) as the kernel shape. It's still a 3D convolution.
This is necessary for compatibility with Windows and macOS.
… class prediction tasks
If the kernel size of the task is set to 3, then this will cause errors if you are trying to use a model with input shape `(1, y, x)`. The head should just be mapping the penultimate layer embeddings into the appropriate dimensions for the task, so it doesn't need a kernel size larger than 1.
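A small sketch of the point about head kernel size; the channel counts are made up:

```python
import torch
from torch import nn

embedding_channels, num_classes = 12, 3

# The task head only maps embedding channels to prediction channels, so a
# kernel size of 1 is enough and works for inputs with a singleton z dim
head = nn.Conv3d(embedding_channels, num_classes, kernel_size=1)

pseudo_2d_embeddings = torch.rand(1, embedding_channels, 1, 64, 64)
print(head(pseudo_2d_embeddings).shape)  # torch.Size([1, 3, 1, 64, 64])

# A kernel_size=3 head would require z >= 3 and fail on (1, y, x) inputs
```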
…used anywhere else
I made a pull request onto this branch that adds 2D and 3D data with appropriate models, as well as a pseudo-2D model on 3D data.
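To make the pseudo-2D idea concrete, a minimal sketch (channel counts and shapes are illustrative): a `Conv3d` with kernel `(1, 3, 3)` never mixes information across z, but the tensors stay 5D, so the rest of the 3D pipeline is unchanged.

```python
import torch
from torch import nn

# "2D" processing of 3D data: the kernel has extent 1 along z
conv = nn.Conv3d(in_channels=1, out_channels=12, kernel_size=(1, 3, 3))

raw = torch.rand(1, 1, 20, 100, 100)  # (batch, channels, z, y, x)
out = conv(raw)
print(out.shape)  # torch.Size([1, 12, 20, 98, 98]): z untouched, y/x shrink by 2
```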
Simplified the parameterized train test, and added validation. Fixed bugs that were found
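A rough, self-contained sketch of what such a parameterized train-and-validate test can look like; the scenarios, shapes, model, and loss are placeholders, not dacapo's actual fixtures:

```python
import pytest
import torch
from torch import nn

@pytest.mark.parametrize("channels", [1, 2])
@pytest.mark.parametrize("kernel", [(1, 3, 3), (3, 3, 3)])
def test_tiny_model_trains(channels, kernel):
    # Tiny stand-in for a UNet: one conv block plus a 1x1x1 head
    model = nn.Sequential(
        nn.Conv3d(channels, 8, kernel, padding="same"),
        nn.ReLU(),
        nn.Conv3d(8, 1, kernel_size=1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    raw = torch.rand(2, channels, 4, 16, 16)
    target = torch.rand(2, 1, 4, 16, 16)

    losses = []
    for _ in range(20):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(raw), target)
        loss.backward()
        optimizer.step()
        losses.append(loss.item())

    # "Validation": the model should at least fit the fixed batch a bit
    assert losses[-1] < losses[0]
```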
There appear to be some Python formatting errors in 85a6dda. This pull request uses the [psf/black](https://github.com/psf/black) formatter to fix these issues.
Thanks @pattonw! Tests are successful now. Shall we merge to main?
Yeah, I think so.
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

| Coverage Diff | main | #327 | +/- |
|---|---|---|---|
| Coverage | 48.37% | 55.14% | +6.76% |
| Files | 184 | 184 | |
| Lines | 6470 | 6514 | +44 |
| Hits | 3130 | 3592 | +462 |
| Misses | 3340 | 2922 | -418 |
By running dacapo I hit multiple bugs, so I created a heavier test that tests most UNet scenarios.
I think we need to make sure that all the unit tests work, and we need to have more realistic tests.
@pattonw