About the rocauc value #52
It seems that it is calculated after taking the mean value, so I have no further doubts.
Hey @IItaly , if you are referring to the Analyze results notebook you are right. Edoardo
P.s. In any case, please be aware that it is not strictly necessary to have the scores normalized between 0 and 1, as the ROC curve from sklearn will automatically find the appropriate thresholds independently of the numeric range of the scores. It is fairer though, as this is the way we presented the results in the paper, so we will fix this as soon as possible :)
Thank you for your reply. Maybe that's why I got a higher score than the one reported in your paper?
It seems that when calculating the video-level score, the average value is taken here, and the score will be normalized between 0 and 1. Does this make the AUC calculation inaccurate?
Hey @IItaly ,
it might be. Did you manage to re-run the pipeline? In #47 we found out that you used a higher number of iterations with respect to those used in the paper, right?
We compute the average of the non-sigmoid scores over all frames. That will be our raw score for the video, which then must be normalized between 0 and 1 for computing the ROC curve. Is that what you were asking?
Instead of looking for where the score is greater or smaller than 0, you can directly compute the normalized score with something along these lines. Edoardo
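Something along those lines might look like the sketch below; the df_res variable and its score column are assumptions taken from the snippet quoted later in the issue, not the exact code from this comment:

import pandas as pd
from scipy.special import expit  # logistic sigmoid: maps any real number into (0, 1)

# Hypothetical raw (non-sigmoid) video-level scores and labels.
df_res = pd.DataFrame({'label': [0, 1], 'score': [-2.7, 3.4]})

# Normalize the score directly instead of checking whether it falls above or below 0.
df_res['score'] = expit(df_res['score'])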
The score is still higher with 20,000 iterations.
Yes. I want to get the correct AUC value.
I did it as shown in the picture, but the final value didn't change.
Then yes, you should average the score over all frames for each video, and then normalize it between 0 and 1 with the expit function (or any sigmoid function you prefer).
I'm sorry, I'm not sure I understood: computing the score the way you did in the picture, did you obtain a value similar to the one you had without normalizing?
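For reference, here is a minimal end-to-end sketch of that procedure (average the raw frame scores per video, then squash them with a sigmoid); the frame-level DataFrame df_frames and its column names are illustrative assumptions, not the repository's actual variables:

import pandas as pd
from scipy.special import expit

# Hypothetical frame-level results: one row per frame, with the video it belongs to,
# the ground-truth label and the raw (non-sigmoid) network score.
df_frames = pd.DataFrame({
    'video': ['a', 'a', 'b', 'b'],
    'label': [1, 1, 0, 0],
    'score': [2.3, 1.7, -3.1, -2.4],
})

# Average the raw scores over all frames of each video...
df_res = df_frames.groupby('video', as_index=False).agg({'label': 'first', 'score': 'mean'})

# ...then normalize the video-level score between 0 and 1 with the logistic sigmoid.
df_res['score'] = expit(df_res['score'])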
Yes, you're right. I did it as shown in the picture, but the final value didn't change. I think I'll try it your way.
Hey @IItaly , you're right, you shouldn't need it.
The result without using expit() is the same as with it. Maybe there's no need to rescale the score between 0 and 1?
Hey, I want to know: what is accbal?
Hey @IItaly , sorry for the late reply.
From a high-level perspective, the results should not be too dissimilar, as the sigmoid function simply normalizes the scores to a scale between 0 and 1; if the network behaves well, we should still see a clear distinction between FAKE and REAL raw scores (i.e., scores not normalized with the sigmoid).
It's the balanced accuracy from scikit-learn; you can find the explanation here: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.balanced_accuracy_score.html . Edoardo
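For context, here is a tiny usage sketch of that metric with made-up numbers; the 0.5 threshold on the sigmoid-normalized scores is an assumption (with expit it corresponds to a raw score of 0):

from sklearn import metrics as M

# Hypothetical ground-truth labels and sigmoid-normalized video scores.
labels = [0, 0, 1, 1]
scores_norm = [0.1, 0.6, 0.4, 0.9]

# Binarize at 0.5 (assumed threshold) and compute the balanced accuracy,
# i.e. the average of the recalls obtained on each class.
preds = [int(s > 0.5) for s in scores_norm]
accbal = M.balanced_accuracy_score(labels, preds)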
Thank you very much. I'll sort out my results for you to compare~ |
Are you also doing research on deepfakes? IItaly
Hi, @CrohnEngineer
I found that the way the AUC value is calculated is a bit strange. The score is used here as-is, without any transformation, and some of the scores are greater than 1. Is this the right way to use this function?
rocauc = M.roc_auc_score(df_res['label'], df_res['score'])
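As noted earlier in the thread, roc_auc_score accepts scores on any numeric range, because the AUC depends only on how the scores rank the samples; the call above is therefore valid even with raw scores greater than 1. A quick sanity check with made-up numbers, showing that a monotonic rescaling such as expit leaves the AUC unchanged:

from scipy.special import expit
from sklearn import metrics as M

labels = [0, 0, 1, 1]
raw_scores = [-2.5, 0.3, 1.8, 4.2]  # made-up, unbounded raw scores

auc_raw = M.roc_auc_score(labels, raw_scores)
auc_norm = M.roc_auc_score(labels, expit(raw_scores))
assert auc_raw == auc_norm  # the AUC depends only on the ranking of the scores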