Thank you for sharing this interesting research and the code.
It seems that the results for the pretrained model and for weight averaging differ from those reported in Task Arithmetic [1], AdaMerging [2], Representation Surgery [3], and many other works, even though all of them use the CLIP ViT models. Could you explain the possible reasons for this?
[1] Editing Models with Task Arithmetic
[2] AdaMerging: Adaptive Model Merging for Multi-Task Learning
[3] Representation Surgery for Multi-Task Model Merging
Thank you.
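
(For clarity, by "weight average" I mean uniformly averaging the parameters of the task-specific fine-tuned checkpoints, roughly as in the sketch below; the checkpoint paths and state-dict layout here are my assumptions, not this repo's actual interface.)

```python
import torch

# Hypothetical checkpoint paths -- substitute the actual fine-tuned CLIP ViT checkpoints.
ckpt_paths = ["finetuned_dtd.pt", "finetuned_eurosat.pt", "finetuned_gtsrb.pt"]

state_dicts = [torch.load(p, map_location="cpu") for p in ckpt_paths]

# Uniform weight averaging: average each parameter tensor across all checkpoints.
avg_state = {
    key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    for key in state_dicts[0]
}

# The averaged state dict can then be loaded into the CLIP ViT image encoder for evaluation:
# model.load_state_dict(avg_state)
```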
Thanks a lot for your interest in our work, and for the question.
We noticed that we used a different split of the DTD dataset when creating the checkpoints, which explains the difference from the results in the original task arithmetic paper.
Please note that we have since updated the DTD checkpoints and the README with instructions for downloading them.
Alternatively, you can run our code with the checkpoints provided in other papers, which minimizes the influence of checkpoint differences when comparing methods.
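
For instance, a minimal task-arithmetic-style merge over externally released checkpoints, in the spirit of [1], could look like the sketch below (the paths, scaling coefficient, and state-dict layout are assumptions for illustration, not our actual code):

```python
import torch

# Hypothetical paths -- point these at the pretrained and fine-tuned checkpoints
# released by another paper to compare methods on identical weights.
pretrained = torch.load("clip_vit_pretrained.pt", map_location="cpu")
finetuned_paths = ["task_dtd.pt", "task_eurosat.pt"]
lam = 0.3  # scaling coefficient; task arithmetic [1] tunes this on a validation set

# Task vectors: difference between each fine-tuned model and the pretrained model.
task_vectors = []
for path in finetuned_paths:
    ft = torch.load(path, map_location="cpu")
    task_vectors.append({k: ft[k] - pretrained[k] for k in pretrained})

# Merged model: pretrained weights plus the scaled sum of all task vectors.
merged = {
    k: pretrained[k] + lam * sum(tv[k] for tv in task_vectors)
    for k in pretrained
}
```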
Please let us know if you have any other questions.