Autoencoder classification in Keras #8609
Is there any way to train the autoencoder and classifier jointly? I think the representation could be optimized to give the best classification results instead of minimizing the reconstruction loss alone.
Yep, you can "connect" features from the discriminator (classifier) to the decoder. Another approach is provided by Adversarial Autoencoders:
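One way to set up the joint training suggested above is a single Keras model with two outputs, so the encoder is optimized on a weighted sum of the reconstruction loss and the classification loss. This is a minimal sketch, not code from the thread: the 784-dim input, 10 classes, the layer sizes, and the 0.5/1.0 loss weights are all assumptions.

```python
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(784,))
encoded = Dense(32, activation='relu')(inputs)  # shared latent representation

# two heads reading the same encoding
decoded = Dense(784, activation='sigmoid', name='reconstruction')(encoded)
classes = Dense(10, activation='softmax', name='classification')(encoded)

joint = Model(inputs, [decoded, classes])
joint.compile(optimizer='adam',
              loss={'reconstruction': 'binary_crossentropy',
                    'classification': 'categorical_crossentropy'},
              # assumed weighting: classification counts twice as much
              loss_weights={'reconstruction': 0.5, 'classification': 1.0})

# toy data just to show the fit call; both targets are passed together
x = np.random.rand(64, 784).astype('float32')
y = np.eye(10)[np.random.randint(0, 10, 64)].astype('float32')
joint.fit(x, {'reconstruction': x, 'classification': y},
          epochs=1, batch_size=32, verbose=0)
```

Because both losses backpropagate through the shared `Dense(32)` layer, the latent code is shaped by label information as well as by reconstruction, which is exactly the trade-off the question asks about.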
Hello everyone, I have the same problem. I am trying to find useful code for improving classification with an autoencoder. I followed this example: keras autoencoder vs PCA. But not with MNIST data; I tried to use it with the GTSRB dataset, so my inputs are 1024-dim (32 x 32 grayscale) instead of 784. Here is my code (the original paste was badly mangled; the grayscale-conversion loop body was lost and is reconstructed below):

```python
import pickle
import numpy as np
import pandas as pd
import cv2
import matplotlib.pyplot as plt
from keras.layers import Input, Dense
from keras.models import Model
from keras import optimizers, regularizers

# Single fully-connected neural layer as encoder and decoder
use_regularizer = True
if use_regularizer:
    # add an L1 sparsity constraint on the encoded representations
    my_regularizer = regularizers.l1(10e-5)
    my_epochs = 100
else:
    my_regularizer = None
    my_epochs = 50

# this is the size of our encoded representations
encoding_dim = 2048

# this is our input placeholder; 1024 = 32 x 32
input_img = Input(shape=(1024,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu',
                activity_regularizer=my_regularizer)(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(1024, activation='sigmoid')(encoded)

# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)

# separate encoder model: maps an input to its encoded representation
encoder = Model(input_img, encoded)

# separate decoder model
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]  # last layer of the autoencoder
decoder = Model(encoded_input, decoder_layer(encoded_input))

# per-pixel binary crossentropy loss
# autoencoder.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
customAdam = optimizers.Adam(lr=0.001)  # you have no idea how many times I changed this number
autoencoder.compile(optimizer=customAdam, loss='binary_crossentropy',
                    metrics=['accuracy'])

# prepare input data: GTSRB pickles instead of mnist.load_data()
# (x_train, _), (x_test, y_test) = mnist.load_data()
train = pd.read_pickle('./traffic-signs-data/train.p')
test = pd.read_pickle('./traffic-signs-data/test.p')
(x_train1, y_train) = train['features'], train['labels']
(x_test1, y_test) = test['features'], test['labels']

# normalize all values between 0 and 1 and flatten the 32x32 images into
# vectors of size 1024 (loop body reconstructed: grayscale + flatten)
x_train = []
for i in x_train1:
    x_train.append(cv2.cvtColor(i, cv2.COLOR_RGB2GRAY).flatten())
x_train = np.array(x_train).astype('float32') / 255.
x_test = np.array([cv2.cvtColor(i, cv2.COLOR_RGB2GRAY).flatten()
                   for i in x_test1]).astype('float32') / 255.

# train the autoencoder; after 50/100 epochs it seems to reach a
# stable train/test loss value
history = autoencoder.fit(x_train, x_train,
                          epochs=my_epochs,
                          batch_size=128,
                          shuffle=True,
                          validation_data=(x_test, x_test))

# encode some images (note that we take them from the test set)
encoded_imgs = encoder.predict(x_test)

# evaluate the model
_, train_acc = autoencoder.evaluate(x_train, x_train, verbose=0)

# plot loss and accuracy during training
plt.subplot(211)
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='test')
plt.legend()
plt.subplot(212)
plt.plot(history.history['acc'], label='train')
plt.plot(history.history['val_acc'], label='test')
plt.legend()
plt.show()

# save the latent-space features (2048-d vectors)
features_path = 'features.p'  # path was not shown in the original paste
pickle.dump(encoded_imgs, open(features_path, 'wb'))

n = 6  # how many images we will display
# K.clear_session()
```

Here is the output (screenshot not reproduced). Please, somebody help me if you have the answer.
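To actually get a classifier out of the autoencoder above, the usual next step is to freeze the trained encoder and fit a small softmax head on its latent features. This is a sketch under assumptions, not the poster's code: the encoder here is a fresh stand-in (128-d latent instead of 2048 to keep it small), and GTSRB's 43 classes are assumed.

```python
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model
from keras.utils import to_categorical

n_classes = 43  # GTSRB has 43 traffic-sign classes

# stand-in for the encoder trained above (same 1024-dim input)
input_img = Input(shape=(1024,))
encoded = Dense(128, activation='relu')(input_img)
encoder = Model(input_img, encoded)

# freeze the encoder so only the classifier head is trained
encoder.trainable = False
probs = Dense(n_classes, activation='softmax')(encoder(input_img))
clf = Model(input_img, probs)
clf.compile(optimizer='adam', loss='categorical_crossentropy',
            metrics=['accuracy'])

# toy data to show the call; real inputs would be the flattened
# grayscale sign images, real targets the one-hot GTSRB labels
x = np.random.rand(32, 1024).astype('float32')
y = to_categorical(np.random.randint(0, n_classes, 32), n_classes)
clf.fit(x, y, epochs=1, batch_size=16, verbose=0)
```

Once the head converges you can optionally unfreeze the encoder and fine-tune the whole stack at a lower learning rate.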
Hello,
I have an issue; I think it comes from the dimensions.
I am trying to find useful code for improving classification with an autoencoder. I followed this example: keras autoencoder vs PCA. But not with MNIST data; I tried to use it with CIFAR-10, so I made some changes, but it seems like something is not fitting. Could anyone please help me with this? If you have another example that runs on a different dataset, that would help.
The validation data in reduced.fit, which is (X_test, Y_test), is not being learned, so .evaluate() gives the wrong accuracy: it always returns val_loss: 2.3026 - val_acc: 0.1000. This is the code, and the error:
[code block empty in the original post]
Here is the output: [screenshot not reproduced]
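A val_acc stuck at exactly 0.1000 on CIFAR-10 is chance level for 10 classes, and a val_loss of 2.3026 is ln(10): the model is predicting uniformly, which most often means the targets do not match the loss. This sketch (the tiny model is a stand-in, not the poster's) shows the label handling that avoids the most common cause, integer labels fed to categorical_crossentropy:

```python
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model
from keras.utils import to_categorical

num_classes = 10
# CIFAR-10 labels come out of cifar10.load_data() as (n, 1) integers
y_raw = np.random.randint(0, num_classes, (100, 1))

# with loss='categorical_crossentropy' the targets MUST be one-hot;
# alternatively keep integers and use 'sparse_categorical_crossentropy'
y = to_categorical(y_raw, num_classes)

# stand-in classifier on flattened 32x32x3 inputs scaled to [0, 1]
x = np.random.rand(100, 3072).astype('float32')
inp = Input(shape=(3072,))
hidden = Dense(64, activation='relu')(inp)
out = Dense(num_classes, activation='softmax')(hidden)
model = Model(inp, out)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
loss, acc = model.evaluate(x, y, verbose=0)
```

Other things worth checking if the labels are already one-hot: that the stacked model was re-compiled after its layers were changed, that the encoder layers you meant to train are not frozen, and that the learning rate is not so high that the softmax saturates.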