Hello. I've been working on a tiny CLIP model (~70 MB), which I've outlined in this Kaggle kernel. Essentially, it takes microsoft/xtremedistil-l6-h256-uncased as the language transformer and edgenext_small from timm as the image encoder, and trains with a CLIP loss over the COCO dataset.
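For reference, a minimal sketch of the setup looks like this (the projection heads, [CLS] pooling, and temperature initialization are my own illustrative assumptions, not the exact kernel code):

```python
import torch
import torch.nn.functional as F
import timm
from transformers import AutoModel


class TinyCLIP(torch.nn.Module):
    def __init__(self, embed_dim=256):
        super().__init__()
        # Image tower: edgenext_small from timm, returning pooled features
        self.image_encoder = timm.create_model(
            "edgenext_small", pretrained=True, num_classes=0
        )
        # Text tower: 6-layer distilled BERT with 256-dim hidden states
        self.text_encoder = AutoModel.from_pretrained(
            "microsoft/xtremedistil-l6-h256-uncased"
        )
        # Linear projections into a shared embedding space
        self.image_proj = torch.nn.Linear(self.image_encoder.num_features, embed_dim)
        self.text_proj = torch.nn.Linear(self.text_encoder.config.hidden_size, embed_dim)
        # Learnable temperature, initialized to ln(1/0.07) as in the CLIP paper
        self.logit_scale = torch.nn.Parameter(torch.tensor(2.659))

    def forward(self, images, input_ids, attention_mask):
        img = self.image_proj(self.image_encoder(images))
        txt_out = self.text_encoder(input_ids=input_ids, attention_mask=attention_mask)
        txt = self.text_proj(txt_out.last_hidden_state[:, 0])  # [CLS] pooling
        img = F.normalize(img, dim=-1)
        txt = F.normalize(txt, dim=-1)
        # Symmetric InfoNCE loss over the in-batch image/caption pairs
        logits = self.logit_scale.exp() * img @ txt.t()
        labels = torch.arange(logits.size(0), device=logits.device)
        return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
```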
A few questions I have are:
Would such a system fit into the intentions of this repo?
What language transformer are you using in this repo? Or are you training all of this from scratch?
Is it safe to assume that you are only concerned about English captions for now?
Would love to contribute in any way I can :).