This repository contains TensorFlow code for an Auto-Encoder architecture built with Residual Blocks.
```python
model = ResNetAE(input_shape=(256, 256, 3),
                 n_ResidualBlock=8,
                 n_levels=4,
                 z_dim=128,
                 bottleneck_dim=128,
                 bUseMultiResSkips=True)
```
- `input_shape`: A tuple defining the input image shape for the model
- `n_ResidualBlock`: Number of convolutional residual blocks at each resolution
- `n_levels`: Number of scaling resolutions; at each level the image dimension halves and the number of filter channels doubles
- `z_dim`: Number of latent dim filters
- `bottleneck_dim`: AE/VAE vectorised latent space dimension
- `bUseMultiResSkips`: At each resolution, the feature maps are added to the latent/image output (green path in diagram)
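A minimal usage sketch for the model constructed above, assuming `ResNetAE` behaves like a standard `tf.keras.Model` whose call returns a reconstruction of its input (the call signature is an assumption, not confirmed here):

```python
import tensorflow as tf

# Hypothetical usage: the exact call signature may differ in this repository.
images = tf.random.uniform((4, 256, 256, 3))  # [Batch x Height x Width x Channels]
x_recon = model(images)                       # forward pass through encoder + decoder
print(x_recon.shape)                          # expected (4, 256, 256, 3) if the model reconstructs its input
```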
The encoder expects a 4-D image tensor of the form `[Batch x Height x Width x Channels]`. The output `z` is of shape `[Batch x Height/(2**n_levels) x Width/(2**n_levels) x z_dim]`.
```python
encoder = ResNetEncoder(n_ResidualBlock=8,
                        n_levels=4,
                        z_dim=10,
                        bUseMultiResSkips=True)
```
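For example, a quick shape check, assuming the encoder is a callable Keras model (a sketch, not verified against the repository):

```python
import tensorflow as tf

x = tf.random.uniform((2, 256, 256, 3))  # [Batch x Height x Width x Channels]
z = encoder(x)
# With n_levels=4 and z_dim=10, the expected shape is
# [2, 256/(2**4), 256/(2**4), 10] == (2, 16, 16, 10)
print(z.shape)
```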
N.B. It is possible to flatten `z` with a dense layer (e.g. `tf.layers.dense`) to obtain a vectorised latent space, as long as the original shape is kept so it can be restored for the decoder during the unflatten step.
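One possible way to do this, as a sketch using TF2 Keras layers rather than the deprecated `tf.layers.dense`; `z` is assumed to be the encoder output from above and `bottleneck_dim` matches the constructor argument:

```python
import tensorflow as tf

# Sketch only: `z` is assumed to have shape (batch, 16, 16, 10) as in the example above.
bottleneck_dim = 128
batch, h, w, c = z.shape

# Flatten to a vectorised latent space.
z_flat = tf.keras.layers.Flatten()(z)                    # (batch, h*w*c)
z_vec  = tf.keras.layers.Dense(bottleneck_dim)(z_flat)   # (batch, bottleneck_dim)

# Un-flatten: project back and restore the spatial shape expected by the decoder.
z_back = tf.keras.layers.Dense(h * w * c)(z_vec)         # (batch, h*w*c)
z_back = tf.reshape(z_back, (-1, h, w, c))               # (batch, h, w, c)
```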
The decoder expects a 4-D feature tensor of the form `[Batch x Height x Width x Channels]`. The output `x_out` is of shape `[Batch x Height*(2**n_levels) x Width*(2**n_levels) x output_channels]`.
```python
decoder = ResNetDecoder(n_ResidualBlock=8,
                        n_levels=4,
                        output_channels=3,
                        bUseMultiResSkips=True)
```
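A corresponding sketch for the decoder, assuming it accepts the encoder's feature map directly (again an illustration, not the repository's exact API):

```python
import tensorflow as tf

z = tf.random.uniform((2, 16, 16, 10))  # [Batch x Height x Width x z_dim]
x_out = decoder(z)
# With n_levels=4 and output_channels=3, the expected shape is
# [2, 16*(2**4), 16*(2**4), 3] == (2, 256, 256, 3)
print(x_out.shape)
```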
The residual block follows the full pre-activation ResNet residual block of He et al.
TODO: implementation changed to Conv-Batch-Relu, update figure
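For reference, a minimal sketch of a residual block in the Conv-BatchNorm-ReLU ordering mentioned in the TODO above; layer choices and hyper-parameters are illustrative and may differ from the repository's implementation:

```python
import tensorflow as tf

def residual_block(x, filters):
    """Illustrative residual block in Conv-BN-ReLU order (see TODO above).
    The exact block in this repository may differ. Assumes `x` already has
    `filters` channels so the identity skip can be added."""
    y = tf.keras.layers.Conv2D(filters, kernel_size=3, padding='same')(x)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.ReLU()(y)
    y = tf.keras.layers.Conv2D(filters, kernel_size=3, padding='same')(y)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.ReLU()(y)
    return x + y  # identity skip connection

# Example: a 64-channel feature map keeps its shape through the block.
feat = tf.random.uniform((1, 32, 32, 64))
out = residual_block(feat, filters=64)
print(out.shape)  # (1, 32, 32, 64)
```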
If you find this work useful for your research, please cite:
```bibtex
@article{ResNetAE,
  title={{R}es{N}et{AE}-https://github.com/farrell236/ResNetAE},
  url={https://github.com/farrell236/ResNetAE},
  author={Hou, Benjamin},
  year={2019}
}
```