
ECE computation #3

Open
Felix-Petersen opened this issue Mar 17, 2024 · 1 comment


@Felix-Petersen
How do you compute the ECE in the code? I looked through the entire EDM code, but I couldn't find it.

Also, a typo in the paper (Table 5 caption): you do not report precision/recall, so you could remove that from the caption.

@AlexMaOLS
Owner

AlexMaOLS commented Mar 24, 2024

Thank you so much for pointing out the typo! I originally included precision and recall, but the extra columns made the table too wide to present properly. The full results are below:

| ImageNet 128x128 | Classifier | IS | FID | Pre | Rec |
| --- | --- | --- | --- | --- | --- |
| Diffusion baseline | - | - | 5.91 | 0.70 | 0.65 |
| Diffusion Finetune guided | Fine-tune | 182.69 | 2.97 | 0.78 | 0.59 |
| Classifier-free Diffusion | - | 158.47 | 2.43 | - | - |
| Diffusion ResNet50 guided (ours) | Off-the-Shelf | 183.51 | 2.36 | 0.77 | 0.60 |
| Diffusion ResNet101 guided (ours) | Off-the-Shelf | 187.83 | 2.19 | 0.79 | 0.58 |

I am sorry that I currently do not have access to the code, since I left the company where I did the research internship. But I can share the function for computing ECE: basically, we apply this ECE function during the diffusion backward process to estimate the ECE score of the classifier's predicted probability.

import numpy as np

def get_ece_score(py, y_test, n_bins=10):
    """Expected Calibration Error over n_bins equal-width confidence bins."""
    py = np.asarray(py)
    y_test = np.asarray(y_test)
    if y_test.ndim > 1:
        # convert one-hot labels to class indices
        y_test = np.argmax(y_test, axis=1)
    py_index = np.argmax(py, axis=1)                 # predicted class per sample
    py_value = py[np.arange(py.shape[0]), py_index]  # confidence of the predicted class
    acc, conf = np.zeros(n_bins), np.zeros(n_bins)
    Bm = np.zeros(n_bins)                            # number of samples per bin
    for m in range(n_bins):
        a, b = m / n_bins, (m + 1) / n_bins
        in_bin = (py_value > a) & (py_value <= b)
        Bm[m] = in_bin.sum()
        if Bm[m] > 0:
            acc[m] = np.mean(py_index[in_bin] == y_test[in_bin])  # bin accuracy
            conf[m] = np.mean(py_value[in_bin])                   # bin mean confidence
    # bin-count-weighted average of |accuracy - confidence|
    return np.sum(Bm * np.abs(acc - conf)) / np.sum(Bm)

# total_pre_probs_array: softmax probability matrix, shape (N, num_classes)
# total_onehot_labels_array: one-hot ground-truth label matrix, shape (N, num_classes)
ece_score = get_ece_score(py=total_pre_probs_array, y_test=total_onehot_labels_array, n_bins=10)
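For intuition on what the function returns, here is a tiny worked check with made-up numbers (not results from the paper). When every sample falls into a single confidence bin, the ECE collapses to the absolute gap between that bin's accuracy and its mean confidence:

```python
import numpy as np

# Toy data (made up for illustration): predicted-class confidences
# and whether each prediction was correct.
conf_vals = np.array([0.6, 0.7, 0.9, 0.8])  # max softmax probabilities
correct = np.array([1.0, 0.0, 1.0, 1.0])    # 1.0 where argmax == true label

# With n_bins=2 every sample lands in the (0.5, 1.0] bin, so
# ECE = |mean accuracy - mean confidence| for that single bin.
# Here both means are 0.75, so the toy classifier is well calibrated.
ece = abs(correct.mean() - conf_vals.mean())
print(ece)  # ~0 (up to floating-point rounding)
```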
