Environment info

adapter-transformers version: ?
PyTorch version (GPU?): torch 2.0.1+cu117 | 2.1.0
peft version: 0.5.0

Details
Hello,
How can we use the provided library for ViT and DINOv2?
I checked the documentation and found this page. However, I don't know how to use it in my code.
Also, since DINOv2 is a state-of-the-art model with promising performance on many tasks, it would be wonderful to have an adapter for it.
Hey @FBehrad, ViT is supported: you can use the ViTAdapterModel class, which you can load with from_pretrained just as you would with transformers. The model provides all the adapter functionality, such as adding, activating, and training adapters. You can also use the regular transformers model classes, as they provide the adapter functionality as well. To get started, it might be helpful to check out the quickstart in the documentation and the example notebooks.
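A minimal sketch of that flow, assuming an adapter-transformers install where the AdapterModel classes are exposed through the transformers namespace (the adapter name "my_vit_adapter" and the checkpoint are placeholders; the import path may differ depending on your version):

```python
from transformers import ViTAdapterModel

# Load a pretrained ViT with adapter support, just like a regular transformers model.
model = ViTAdapterModel.from_pretrained("google/vit-base-patch16-224-in21k")

# Add a new, randomly initialized adapter and prepare it for training.
model.add_adapter("my_vit_adapter")
model.train_adapter("my_vit_adapter")        # freezes the base model, trains only the adapter
model.set_active_adapters("my_vit_adapter")  # route the forward pass through the adapter

# ... run your usual training loop or Trainer, then save only the adapter weights:
model.save_adapter("./saved_adapter", "my_vit_adapter")
```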
However, we currently don't support DINOv2. I will change the label of this issue to enhancement and leave it open as a feature request.