
Models (CNN's) #8

Closed
4 tasks done
MarijnJABoer opened this issue Feb 7, 2018 · 4 comments
MarijnJABoer commented Feb 7, 2018

  • Choose the first NN to test
    • Inception v3!
  • Place all models in a class, with a flag for choosing a model
  • Write a general compile function for all models (see the sketch below)
  • Fine-tune Inception
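
A minimal sketch of what the "general compile function" item could look like, assuming Keras 2.x and categorical cross-entropy; the function name, optimizer, and metrics are placeholders, not decisions recorded in this issue:

from keras.optimizers import Adam


def compile_model(model, learning_rate=1e-3):
    """Compile any of the candidate models with one shared configuration."""
    # Optimizer and metrics are assumptions; swap in whatever the project settles on.
    model.compile(optimizer=Adam(lr=learning_rate),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
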
MarijnJABoer self-assigned this on Feb 12, 2018
MarijnJABoer commented:

Models that can be loaded (a sketch of the flag-based selection follows the list):

  • ResNet
  • DenseNet
  • Inception v3
  • Xception
  • Self-designed network (model_formicID)
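
A rough sketch of the flag-based selection, assuming a Keras version whose keras.applications already ships DenseNet; select_model, the default input shape, and the class count are hypothetical placeholders, and the self-designed model_formicID would be wired in as a fifth entry:

from keras.applications import DenseNet169, InceptionV3, ResNet50, Xception


def select_model(model_flag, input_shape=(224, 224, 3), num_classes=97):
    """Return an uncompiled Keras model chosen by a string flag."""
    models = {
        'ResNet': ResNet50,
        'DenseNet': DenseNet169,
        'InceptionV3': InceptionV3,
        'Xception': Xception,
        # 'model_formicID' (the self-designed network) would be added here.
    }
    if model_flag not in models:
        raise ValueError('Unknown model flag: {}'.format(model_flag))
    return models[model_flag](weights=None,
                              input_shape=input_shape,
                              classes=num_classes)


model = select_model('InceptionV3')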

MarijnJABoer changed the title from "Create neural networks" to "Models (CNN's)" on Feb 15, 2018
MarijnJABoer commented:

Implement a multi-GPU model:

import tensorflow as tf
from keras.applications import Xception
from keras.utils import multi_gpu_model
import numpy as np

num_samples = 1000
height = 224
width = 224
num_classes = 1000

# Instantiate the base model (or "template" model).
# We recommend doing this under a CPU device scope,
# so that the model's weights are hosted on CPU memory.
# Otherwise they may end up hosted on a GPU, which would
# complicate weight sharing.
with tf.device('/cpu:0'):
    model = Xception(weights=None,
                     input_shape=(height, width, 3),
                     classes=num_classes)

# Replicates the model on 8 GPUs.
# This assumes that your machine has 8 available GPUs.
parallel_model = multi_gpu_model(model, gpus=8)
parallel_model.compile(loss='categorical_crossentropy',
                       optimizer='rmsprop')

# Generate dummy data.
x = np.random.random((num_samples, height, width, 3))
y = np.random.random((num_samples, num_classes))

# This `fit` call will be distributed on 8 GPUs.
# Since the batch size is 256, each GPU will process 32 samples.
parallel_model.fit(x, y, epochs=20, batch_size=256)

# Save model via the template model (which shares the same weights):
model.save('my_model.h5')

https://keras.io/utils/#multi_gpu_model

MarijnJABoer commented Feb 15, 2018

ResNet. Residual Network developed by Kaiming He et al. was the winner of ILSVRC 2015. It features special skip connections and a heavy use of batch normalization. The architecture is also missing fully connected layers at the end of the network. The reader is also referred to Kaiming’s presentation (video, slides), and some recent experiments that reproduce these networks in Torch. ResNets are currently by far state of the art Convolutional Neural Network models and are the default choice for using ConvNets in practice (as of May 10, 2016). In particular, also see more recent developments that tweak the original architecture from Kaiming He et al. Identity Mappings in Deep Residual Networks (published March 2016).

http://cs231n.github.io/convolutional-networks/
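
To make the skip-connection point concrete, here is a toy identity block in Keras. This is an illustrative sketch, not code taken from any ResNet implementation, and the layer sizes are arbitrary:

from keras.layers import Activation, Add, BatchNormalization, Conv2D, Input
from keras.models import Model


def identity_block(x, filters):
    """Two conv/BN stages whose output is added back to the block input."""
    shortcut = x
    y = Conv2D(filters, (3, 3), padding='same')(x)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv2D(filters, (3, 3), padding='same')(y)
    y = BatchNormalization()(y)
    y = Add()([shortcut, y])  # the skip connection
    return Activation('relu')(y)


# Wrap a single block in a Model just to show the shapes line up.
inputs = Input(shape=(56, 56, 64))
block = Model(inputs, identity_block(inputs, 64))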

MarijnJABoer mentioned this issue on Feb 20, 2018
MarijnJABoer added this to the "Network building" milestone on Feb 26, 2018
MarijnJABoer commented:

Inception v3 is the chosen model
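
A sketch of how the Inception v3 fine-tuning could start, following the usual Keras pattern of swapping in a new classifier head and freezing the pretrained base first; the head layout and the class count are assumptions, not part of this issue:

from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

num_classes = 97  # placeholder class count

# Load Inception v3 pretrained on ImageNet, without its classifier head.
base_model = InceptionV3(weights='imagenet', include_top=False)

# Attach a new head for the target classes.
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(1024, activation='relu')(x)
predictions = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)

# Stage one: freeze the pretrained base and train only the new head.
for layer in base_model.layers:
    layer.trainable = False

model.compile(optimizer='rmsprop', loss='categorical_crossentropy')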
