Hi,
It was great to read your paper.
Retrieval script
I was wondering if you are going to release the retrieval script any time soon?
Autoencoder for getting image embeddings for retrieval:
What is the exact architecture of this autoencoder? Are the encoder and decoder the same as the encoder and generator used in TIM-GAN?
Could you please explain the retrieval process? In particular, we have an autoencoder made of an encoder E1 and a decoder D1, and we pretrain this autoencoder on the dataset. Could you tell me the exact pretraining process, loss functions, etc.? Can we use the run_pretrain.sh script for training the autoencoder?
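To make the question concrete, this is roughly the pretraining setup I would write myself (PyTorch; the layer sizes and the plain L2 reconstruction loss are my own guesses, not taken from your code):

```python
# Sketch of the E1/D1 pretraining I have in mind -- my assumptions,
# not necessarily the paper's setup.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        # E1: conv encoder, 3x128x128 -> dim*4 x 16 x 16 feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim * 2, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim * 2, dim * 4, 4, stride=2, padding=1), nn.ReLU(),
        )
        # D1: mirror of E1, upsampling back to the input resolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(dim * 4, dim * 2, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(dim * 2, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(dim, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=2e-4)
recon_loss = nn.MSELoss()

x = torch.randn(8, 3, 128, 128)   # stand-in for a training batch
x_hat, z = model(x)
loss = recon_loss(x_hat, x)       # plain L2? or L1 / perceptual / adversarial terms as well?
opt.zero_grad(); loss.backward(); opt.step()
```

Is this close to what run_pretrain.sh actually does, or does the pretraining involve additional losses?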
Regarding the recall calculation for your method, this is written in the paper:
but how do we calculate recall for the other methods? i.e., what encoder is used in that case?
My understanding is that there should be a separate autoencoder trained on the dataset, independent of TIM-GAN and of the other methods. Then, after all the models are trained and can generate images, we use this pretrained autoencoder to get the image representations of the generated images and use them as queries, along the lines of the sketch below.
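Concretely, this is the recall computation I have in mind (again my assumption, not something stated in the paper: cosine similarity over L2-normalised embeddings, with the ground-truth target of pair i sitting at gallery index i):

```python
# Hypothetical recall@k over embeddings from the frozen pretrained encoder E1.
import torch
import torch.nn.functional as F

@torch.no_grad()
def recall_at_k(encoder, generated, targets, k=1):
    """generated, targets: (N, 3, H, W); row i of each is a matched pair."""
    q = encoder(generated).flatten(1)   # query embeddings from generated images
    g = encoder(targets).flatten(1)     # gallery embeddings from true targets
    q = F.normalize(q, dim=1)
    g = F.normalize(g, dim=1)
    sim = q @ g.t()                     # (N, N) cosine similarity matrix
    topk = sim.topk(k, dim=1).indices   # k nearest gallery items per query
    truth = torch.arange(q.size(0)).unsqueeze(1)
    hits = (topk == truth).any(dim=1)   # was the true target retrieved?
    return hits.float().mean().item()
```

For every method I would then call this with the same frozen encoder, e.g. recall_at_k(model.encoder, generated_images, target_images, k=1).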
Could you tell me if this is what happens in the paper, and if not, what the exact process is? I want to calculate these metrics for the other methods on my side.
Thank you in advance for the help.