This repository has been archived by the owner on Nov 3, 2022. It is now read-only.

[Bugfix] Fix assertion error when using DenseNetFCN with grayscale input + Add activation parameter #52

Closed
wants to merge 6 commits

Conversation

@titu1994 (Contributor) commented Mar 27, 2017

Fixes an assertion error when a grayscale image is input to DenseNetFCN.

The submitted code is essentially a copy of the _check_input code in Keras, with the few parts that caused the crash removed.
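
For illustration, here is a minimal sketch of what a relaxed check along these lines might look like. The helper name and signature below are hypothetical, not the exact code in this PR; the actual change copies Keras' internal input-shape check and drops the channel-count assertion.

def _check_input_shape(input_shape, min_size, data_format):
    """Hypothetical sketch: validate spatial dims without rejecting grayscale (1-channel) input."""
    if input_shape is None:
        return input_shape
    if data_format == 'channels_first':
        channels, rows, cols = input_shape
    else:
        rows, cols, channels = input_shape
    # Only the spatial dimensions are constrained; any channel count (including 1) is accepted.
    if rows is not None and rows < min_size:
        raise ValueError('Input rows must be at least %d, got %d' % (min_size, rows))
    if cols is not None and cols < min_size:
        raise ValueError('Input cols must be at least %d, got %d' % (min_size, cols))
    return input_shape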

@titu1994 (Contributor Author)

The tests are failing due to an "E303 too many blank lines (2)" error in the PEP8 check, which I have fixed. The other failures are due to Keras 2 code errors in test_deconvolution_3d and test_advanced_activations.

@titu1994 changed the title from "[Bugfix] Fix assertion error when using DenseNetFCN with grayscale input" to "[Bugfix] Fix assertion error when using DenseNetFCN with grayscale input + Add activation parameter" on Mar 27, 2017
@titu1994 (Contributor Author) commented Mar 27, 2017

I've added an activation parameter, which allows a sigmoid activation to be used when training the network on binary segmentation masks. For binary masks, this is more suitable than a class-wise loss over individual pixels.
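
As a rough usage sketch (the import path and argument names here are assumptions for illustration, not taken verbatim from this PR):

from keras_contrib.applications.densenet import DenseNetFCN

# Grayscale input plus a single-channel sigmoid output for binary masks.
model = DenseNetFCN(input_shape=(224, 224, 1),
                    classes=1,
                    activation='sigmoid')
model.compile(optimizer='adam', loss='binary_crossentropy')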

@ahundt (Collaborator) left a comment

These look like very useful changes! Just a few items to address.

    if include_top:
-        weights_path = get_file('densenet_40_12_th_dim_ordering_th_kernels.h5',
+        weights_path = get_file('densenet_40_12_th_data_format_th_kernels.h5',
Collaborator:

Did you upload files to the new path? The same applies to the other weight files.

Contributor Author:

No, I did not upload any new file to a new path. The remote path remains the same; only the name under which the weight file is saved when it is downloaded onto a local machine changes.
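
For reference, a small sketch of the distinction: in keras.utils get_file the first argument is only the name of the local cache file, while the download location is given by origin, so renaming the cached file does not require re-uploading the weights. The URL below is a placeholder, not the project's actual release path.

from keras.utils.data_utils import get_file

weights_path = get_file(
    'densenet_40_12_th_data_format_th_kernels.h5',               # local cache filename only
    origin='https://example.com/densenet_40_12_th_kernels.h5',   # remote path, unchanged
    cache_subdir='models')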

    # Determine proper input shape
    min_size = 2 ** nb_dense_block
    input_shape = _obtain_input_shape(input_shape,
                                      default_size=32,
                                      min_size=16,
Collaborator:

Should min_size be passed here instead of the hard-coded 16?

Contributor Author:

It seems the code did not update. I will fix this in the next commit.
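
A hedged sketch of the intended fix (the wrapper function name is hypothetical and this is not the exact commit): derive min_size from the number of dense blocks and pass it through instead of the hard-coded 16. Keyword names of _obtain_input_shape differ between Keras versions (dim_ordering vs data_format, include_top vs require_flatten); the Keras 2.0.x form is shown.

from keras import backend as K
from keras.applications.imagenet_utils import _obtain_input_shape

def _resolve_fcn_input_shape(input_shape, nb_dense_block, include_top):
    # Each dense block is followed by one downsampling step, so the input
    # must be at least 2 ** nb_dense_block along each spatial dimension.
    min_size = 2 ** nb_dense_block
    return _obtain_input_shape(input_shape,
                               default_size=32,
                               min_size=min_size,
                               data_format=K.image_data_format(),
                               include_top=include_top)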

@@ -43,7 +43,7 @@ def DenseNet(depth=40, nb_dense_block=3, growth_rate=12, nb_filter=16, nb_layers
optionally loading weights pre-trained
Collaborator:

Add the activation parameter to DenseNet as well. Not only is it useful in the regular DenseNet for the two-class case, it will also be very useful when combined with the Atrous DenseNet in #46.

@@ -406,7 +415,7 @@ def __transition_up_block(ip, nb_filters, type='upsampling', output_shape=None,
    elif type == 'subpixel':
        x = Conv2D(nb_filters, (3, 3), padding="same", kernel_regularizer=l2(weight_decay), activation='relu',
                   use_bias=False, kernel_initializer='he_uniform')(ip)
        # x = Convolution2D(nb_filters, 3, 3, activation="relu", border_mode='same', W_regularizer=l2(weight_decay),
Collaborator:

These commented-out lines shouldn't be here, and all calls should use the updated Keras 2 API. You might just want to make use of the similar changes in #46.

Contributor Author:

I'm keeping the Keras 1 code for now, since it is cross-compatible with Keras 2; the reverse is not true.

I need to use Keras 1.2.2 for the time being, since it still has several features that were removed from Keras 2 (such as BatchNormalization mode 2, which I need for GANs, and the fbeta_score metric), and a few bugs that have come up in Keras 2 are making me delay the switch.

I will update the code when Keras 2 stabilizes and addresses some of these issues, such as mode 2 batch normalization. Until then, it is better to keep the code compatible with both versions.
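
For context, a side-by-side sketch of the same convolution in the two APIs being discussed (hyperparameter values are illustrative only):

from keras.layers import Input, Conv2D
from keras.regularizers import l2

nb_filters, weight_decay = 16, 1e-4
ip = Input(shape=(32, 32, nb_filters))

# Keras 2 style, as in the diff above:
x = Conv2D(nb_filters, (3, 3), padding='same', activation='relu',
           kernel_regularizer=l2(weight_decay), use_bias=False,
           kernel_initializer='he_uniform')(ip)

# Keras 1 equivalent (the commented-out form flagged above):
# x = Convolution2D(nb_filters, 3, 3, activation='relu', border_mode='same',
#                   W_regularizer=l2(weight_decay), bias=False,
#                   init='he_uniform')(ip)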

@titu1994 (Contributor Author)

Someone merged the Keras 2 API code changes into this, so I will rewrite this PR. Also, who merged the changes related to concatenate when master Keras is showing bugs with concatenate in DenseNet?

@titu1994 closed this Mar 29, 2017