How was the number of trainable parameters calculated? #31
I have tried to re-implement the architecture described in the paper exactly, just in TensorFlow, but I don't get the correct number of trainable parameters. I can't find where this is calculated, so I was hoping someone could help me out.

Paper:
56 layers: 1.5 million
103 layers: 9.4 million

My implementation:
56 layers: 1.4 million
103 layers: 9.2 million

The discrepancy is small, so normally I wouldn't care, but I can't quite match the performance results reported in the paper, so perhaps this could help reveal a bug in my code.
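Counts like the ones above are straightforward to double-check in TensorFlow by summing the sizes of all trainable variables. A minimal sketch, assuming a Keras-style model object (the `model` and `build_fc_densenet` names are placeholders, not code from this repository):

```python
import numpy as np
import tensorflow as tf

def count_trainable_params(model: tf.keras.Model) -> int:
    """Sum the element counts of every trainable variable in the model."""
    return int(sum(np.prod(v.shape) for v in model.trainable_variables))

# Hypothetical usage, where build_fc_densenet is whatever function
# constructs the network:
# model = build_fc_densenet(n_layers=56)
# print(count_trainable_params(model))
```

For Keras models, `model.summary()` prints the same total along with per-layer counts, which helps localize which block contributes the surplus or deficit.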
Comments

Could you tell me your best result?
@Faur @dongzhuoyao Hi, do you still remember the FLOPs of the model from when you ran fc-densenet? The reviewer of my paper asked me to report the parameter and FLOP counts, but I could not run the code because the dataset would not load. I look forward to your reply, thank you very much.
@xiaomixiaomi123zm I am sorry, but I can't help you. It has been a while since I worked on this, and I no longer have access to the code base. But if the issue is just the dataset, you should be able to make some dummy data.
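For what it's worth, the dummy-data idea could look something like the sketch below. The shapes and class count are assumptions based on CamVid-style segmentation data, not the repository's actual loader:

```python
import numpy as np

def make_dummy_batch(batch_size=4, height=360, width=480, n_classes=12):
    """Random stand-in for the real dataset: CamVid-like RGB images and
    per-pixel integer labels. All shapes here are assumptions."""
    images = np.random.rand(batch_size, height, width, 3).astype(np.float32)
    labels = np.random.randint(0, n_classes, size=(batch_size, height, width))
    return images, labels
```

Feeding batches like this is enough to build the graph and measure parameters or FLOPs, even though any resulting training curves are meaningless.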
Hi, I don't quite understand how to do that. Would it be convenient for you to add my QQ (907675183) so you can talk me through it? I have tried many approaches without success, thank you very much!! @Faur
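On the FLOPs question above: once the graph can be built (dummy data is enough), TensorFlow's profiler can report the total float operations. A minimal sketch, assuming a TF1-style graph with a hypothetical `build_model` function:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    # Input shaped like a single CamVid image (an assumption, as above).
    inputs = tf.placeholder(tf.float32, shape=(1, 360, 480, 3))
    # build_model(inputs)  # hypothetical: construct the FC-DenseNet here

    # Ask the profiler to count floating-point operations in the graph.
    opts = tf.profiler.ProfileOptionBuilder.float_operation()
    flops = tf.profiler.profile(graph, options=opts)
    print('Total float ops:', flops.total_float_ops)
```

Note that the profiler counts multiplies and adds separately, so the result is roughly twice the multiply-accumulate (MAC) count that some papers report.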