Question about the mAR #53
Here is the code for accuracy: UniFormerV2/slowfast/utils/metrics.py, lines 9 to 64 in 722a434.
The code below was modified with GPT; please try it:

```python
def mean_recall(preds, labels, num_classes):
    """
    Calculate the mean recall given predictions and labels.
    Args:
        preds (Tensor): Predictions from the model. Dimension is N x ClassNum.
        labels (Tensor): True labels. Dimension is N.
        num_classes (int): Number of classes.
    Returns:
        mean_recall (float): The mean recall over all classes.
    """
    assert preds.size(0) == labels.size(0), "Batch dim of predictions and labels must match"
    # Convert predictions to class indices
    _, predicted_classes = preds.max(dim=1)
    # Initialize TP and FN counters
    TP = [0] * num_classes
    FN = [0] * num_classes
    # Count TP and FN for each class
    for i in range(num_classes):
        TP[i] = ((predicted_classes == i) & (labels == i)).sum().item()
        FN[i] = ((predicted_classes != i) & (labels == i)).sum().item()
    # Calculate recall for each class
    recalls = [TP[i] / (TP[i] + FN[i]) if (TP[i] + FN[i]) > 0 else 0 for i in range(num_classes)]
    # Calculate mean recall
    mean_recall = sum(recalls) / num_classes
    return mean_recall
```
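For a large `num_classes`, the per-class loop above can be replaced by a vectorized equivalent. A sketch (not part of the repository; assumes PyTorch) using `torch.bincount`, relying on the fact that TP + FN for class `i` is simply the number of samples whose label is `i`:

```python
import torch

def mean_recall_vectorized(preds, labels, num_classes):
    # Per-sample predicted class: argmax over the class dimension.
    predicted_classes = preds.argmax(dim=1)
    # True positives per class: labels of the correctly predicted samples.
    tp = torch.bincount(labels[predicted_classes == labels], minlength=num_classes).float()
    # Support per class (TP + FN): total samples carrying that label.
    support = torch.bincount(labels, minlength=num_classes).float()
    # Per-class recall, with 0 for classes that never appear in labels.
    recalls = torch.where(support > 0, tp / support.clamp(min=1), torch.zeros_like(tp))
    return (recalls.sum() / num_classes).item()

# Illustrative example: 4 samples, 3 classes (values are made up).
preds = torch.tensor([[0.9, 0.1, 0.0],
                      [0.2, 0.7, 0.1],
                      [0.1, 0.1, 0.8],
                      [0.6, 0.3, 0.1]])
labels = torch.tensor([0, 1, 2, 1])
# Per-class recalls are [1.0, 0.5, 1.0], so the mean recall is 2.5 / 3.
print(mean_recall_vectorized(preds, labels, num_classes=3))
```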
Thank you for your response! I will try it later.
Link: https://pan.baidu.com/s/1T8omLX_HE88CdbFpVupalA Password: f4vw
Thank you very much for your reply. I have now tested your k400_b16_f8x224 model on the K400 dataset and obtained 83.38, which is essentially consistent with the number you report. I then wanted to test your k400+k710_l14_f64x336 model, but it does not seem to start on my single RTX 3090. Could you give me some advice on modifying the config files? I am pasting the test.sh and config.yaml I am currently using. From config.yaml:

PATH_TO_DATA_DIR: path-to-imagenet-dir
TRAIN_JITTER_SCALES_RELATIVE: [0.08, 1.0]

I sincerely hope for your reply. Thank you very much.
A single 3090 does not have enough memory for 64 frames; try 16 or 32 frames instead, and the results will be about the same. You could also try our new model, which uses less memory and gets better results.
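In SlowFast-style configs the frame count is a data setting; a hedged sketch of the kind of change suggested above (the key names follow the SlowFast convention and should be verified against the actual config.yaml in the repo):

```yaml
DATA:
  NUM_FRAMES: 32        # down from 64 so it fits a single 24 GB 3090
TEST:
  BATCH_SIZE: 1         # reduce further if memory is still tight
```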
Wow, thank you so much! I will try your new model later!
Hello, I would like to apply the model to other action labels, roughly 100 kinds of actions. For data preparation, can I follow the K400 dataset, i.e., build my own category file and train/test annotation files in the style of the annotation files you provide (kinetic_categories.txt, train.csv, test.csv), then change num_class in the config file before training and testing? In addition, the actions I want to test are fairly common ones, such as typing, writing, making phone calls, cooking, and holding an umbrella. If I annotate a small amount of data for fine-tuning and testing, should I expect reasonably good results? I sincerely hope for your reply. Thank you very much.
Yes. You can annotate a small amount of data first, split it into train and val, and try fine-tuning the K400 checkpoint.
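A hedged sketch of generating annotation files in that style (the file names, paths, and label names below are hypothetical; the exact separator and column order should be checked against the repository's data-preparation docs and the config's path/label separator setting):

```python
# Hypothetical mapping of custom actions to integer ids
# (a real setup would list ~100 categories).
categories = ["typing", "writing", "phone_call", "cooking", "holding_umbrella"]

# Category file in the "name,id" per-line style of kinetic_categories.txt.
with open("my_categories.txt", "w") as f:
    for idx, name in enumerate(categories):
        f.write(f"{name},{idx}\n")

# train.csv / test.csv style: one "video_path label" pair per line
# (space-separated in SlowFast-style loaders; verify against the repo).
samples = [("videos/typing_0001.mp4", 0),
           ("videos/cooking_0001.mp4", 3)]
with open("my_train.csv", "w") as f:
    for path, label in samples:
        f.write(f"{path} {label}\n")
```

After generating the files, the remaining change would be setting the class count in the config (MODEL.NUM_CLASSES in SlowFast-style configs) to match the number of categories.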
Hello, I noticed that in the K400 task neither LMHRA nor T-Down is used. Having read the paper, is this because the purely global design is already enough to reach the best performance on K400? If I want to modify the model to improve results on K400, should I start from the global design? I sincerely hope for your reply.
I've observed a common practice in video understanding models and research papers where the recall rate is essentially not provided. Recently, my teacher assigned me a project that requires both accuracy (acc) and recall (AR). When I explained to him that papers usually don't include recall, he suggested running the code and modifying it to output the recall rate. However, I'm uncertain about how to proceed with this. I would greatly appreciate your prompt response. Thank you!
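For reference, the "mean recall" discussed in this thread is the same quantity scikit-learn calls macro-averaged recall, which can serve as an independent cross-check once predicted class indices have been collected. A small sketch (the arrays below are made-up values, not model outputs):

```python
import numpy as np
from sklearn.metrics import recall_score

# Hypothetical predicted class indices and ground-truth labels.
y_pred = np.array([0, 1, 2, 0])
y_true = np.array([0, 1, 2, 1])

# average="macro" is the unweighted mean of per-class recalls (mAR).
# Here the per-class recalls are [1.0, 0.5, 1.0], so mAR = 2.5 / 3.
mar = recall_score(y_true, y_pred, average="macro")
print(mar)
```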