One question #1
Hi, thank you for your code! I want to ask a question: what is "latent_space_dimensions"? I did not find this variable in the paper.
Hello,
It's been a while since I did this, so I don't have the details that clear. In the paper they extracted $\theta^1_d$ from the latent space, but they don't define its dimensionality. The parameter latent_space_dimensions controls exactly that: the shape of this variable. It is also the dimension of the "embedding", the very last layer before the logits.
See this line here:
https://github.com/gpascualg/CrossGrad/blob/master/crossgrad.py#L21
The output of the latent space is [batch, latent_space_dimensions].
Likewise, it is used again in the domain transformation:
https://github.com/gpascualg/CrossGrad/blob/master/crossgrad.py#L28
I chose to make the domain and label classifiers both work with the same dimensionality via this parameter, so that when concatenating (features = tf.concat((features, latent), axis=-1) in the .ipynb inside label_fn) the final size is [batch, latent_space_dimensions * 2].
In either case, that was my choice; I don't recall the paper specifically mentioning dimensions. I hope this clarifies the parameter a bit.
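A minimal shape sketch of that concatenation, assuming TF 2.x; the tensors below are illustrative stand-ins for the repo's actual networks, not its code:

```python
import tensorflow as tf

batch = 4
latent_space_dimensions = 32

# Stand-ins for the repo's outputs: the latent space and the label
# classifier's features both have shape [batch, latent_space_dimensions].
latent = tf.random.normal((batch, latent_space_dimensions))
features = tf.random.normal((batch, latent_space_dimensions))

# The concatenation inside label_fn doubles the last dimension.
combined = tf.concat((features, latent), axis=-1)
print(combined.shape)  # (4, 64), i.e. [batch, latent_space_dimensions * 2]
```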
Thank you for your help! When I run your example CrossGrad.ipynb, the output only reports the loss, not the accuracy, so how can I get the accuracy? And does the code only apply when the number of domains is 2?
Best wishes!
I'm not using TF 1.x anymore (switched to 2.x), so I don't know the exact steps, but it would be something along the lines of using these outputs here: The variable
I don't understand what you mean by "the domain number is 2", but the code should be generic enough to admit any and all combinations of input shapes, outputs, and number of classes.
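A hedged sketch of how one might compute accuracy from the label logits, assuming the notebook exposes a logits tensor and integer labels (both names here are hypothetical placeholders for whatever the notebook actually outputs):

```python
import tensorflow as tf

def accuracy(logits, labels):
    """Fraction of samples whose argmax prediction matches the integer label.

    `logits` has shape [batch, num_classes]; `labels` has shape [batch].
    """
    preds = tf.argmax(logits, axis=-1)          # predicted class per sample
    preds = tf.cast(preds, labels.dtype)        # match the label dtype
    return tf.reduce_mean(tf.cast(tf.equal(preds, labels), tf.float32))
```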
I am confused about the relationship between d1 and d2. Which one is the predicted class according to the domain classifier? Can you explain it a little more clearly?
Line 125 in 2cb4783
And when I test on samples from a new domain, the result of d1 must be wrong, so why return d1?
I'm sorry, but it's been too long since I read that paper and implemented the code. If I'm not wrong, which I might be,
As for the return: I return all the data I deem relevant for inspection. Returning it allows you to manually retrieve the values, summarize them, plot them, or do whatever you deem necessary. There is no real implication in returning it.
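For context, here is a generic sketch of the CrossGrad perturbation step from the paper (Shankar et al., 2018), not the repository's exact code: each classifier is trained on inputs perturbed along the input gradient of the other classifier's loss.

```python
import tensorflow as tf

def perturb(x, loss_fn, eps=0.5):
    """Perturb x along the input gradient of the given loss (CrossGrad-style).

    `loss_fn` maps an input batch to a scalar loss; `eps` is the step size.
    This is a sketch of the idea, not the repository's implementation.
    """
    with tf.GradientTape() as tape:
        tape.watch(x)           # x may be a plain tensor, so watch it explicitly
        loss = loss_fn(x)
    grad = tape.gradient(loss, x)
    return x + eps * grad

# Training then uses perturb(x, domain_loss) when fitting the label classifier
# and perturb(x, label_loss) when fitting the domain classifier.
```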