Hello, I have some questions about training and testing that have been bothering me.
Did you train the shift model with the default parameters in the code you provided?
You said: "In my experiments, the model took something like 2K iterations to reach chance performance (loss:label = 0.693), and 11K iterations to do better than chance (loss:label = 0.692). So, for a long time it looked like the model was stuck at chance."
So my question is: once the model does better than chance, does "loss:label" start to decrease faster?
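As a side note on the numbers quoted above: for a binary classification task with balanced labels, "chance" cross-entropy is exactly ln 2 ≈ 0.693, since a model that predicts p = 0.5 for every example incurs −log(0.5) regardless of the true label. A quick check:

```python
import math

# A chance-level binary classifier assigns probability 0.5 to each class,
# so its per-example cross-entropy is -log(0.5) = ln 2 for every label.
chance_loss = -math.log(0.5)
print(chance_loss)  # 0.6931...
```

This is why a loss of 0.693 means the model is still at chance, and 0.692 is the first sign it has learned anything.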
By testing, I mean computing accuracy on the test set as described in your paper, where you report that the model obtained 59.9% accuracy on held-out videos for its alignment task (chance = 50%). My question is: should the parameter `do_shift` be set to True or False? With `do_shift=True` I get an accuracy of 0.50633484, and with `do_shift=False` I get 0.43133482. Both are quite different from the 0.599 reported in your paper. For reference, I use the same dataset-reading code and the pre-trained model you provided; the dataset is generated from AudioSet.
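For comparing against the paper's number, here is a minimal sketch of how alignment accuracy is typically computed from the classifier's outputs. The function name and array shapes are my own assumptions for illustration, not the repo's actual API:

```python
import numpy as np

def alignment_accuracy(logits, labels):
    """Fraction of examples whose argmax prediction matches the label.

    logits: (N, 2) array of per-class scores from the alignment head
            (class 0 = aligned, class 1 = shifted, by assumption).
    labels: (N,) array of 0/1 ground-truth labels.
    """
    preds = np.argmax(logits, axis=1)
    return float(np.mean(preds == labels))

# Sanity check: with balanced labels, a constant prediction scores ~50%,
# which matches the "chance = 50%" baseline from the paper.
labels = np.array([0, 1] * 500)
constant_logits = np.tile([1.0, 0.0], (1000, 1))  # always predicts class 0
print(alignment_accuracy(constant_logits, labels))  # 0.5
```

If your accuracy hovers around 0.50 with one `do_shift` setting and below chance with the other, one thing worth checking is whether the label convention at test time matches the one used in training (i.e., whether a shifted example is labeled 0 or 1).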
Here is the code for testing; I only added a function to `class NetClf` in `shift_net.py`.
And I run the testing like this: