Does this project only work on Linux? #47
Comments
Currently, the Fairseq version is complete but works only on Linux due to limitations with Fairseq itself. However, for inference, you can refer to the Colab example in the README, which uses the transformers library and can be executed on other operating systems. In addition, could you clarify which tutorials you’re looking for? Are you asking for a tutorial on Fairseq?
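For readers on macOS or Windows, the transformers-based inference path mentioned above can be sketched roughly as below. The checkpoint name, model class, and hyperparameters here are placeholders for illustration only; the authoritative recipe is the Colab notebook linked in the README.

```python
# Hedged sketch of transformers-based inference (not the project's official
# script). The checkpoint name and model class are assumptions: substitute
# the ones used in the README's Colab example.
def run_inference(prompt, checkpoint="hypothetical/biomedgpt-checkpoint"):
    # Imports kept inside the function so the sketch can be read without
    # downloading any weights; requires `pip install transformers torch`.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

    inputs = tokenizer(prompt, return_tensors="pt")
    # Beam-search decoding, analogous to the Fairseq-side generation
    # discussed later in this thread.
    output_ids = model.generate(**inputs, num_beams=5, max_length=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Because transformers has no Linux-only dependencies, this path works on any OS where PyTorch installs.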
Thank you.
May I ask, in which file path are your trained models saved? I want to fine-tune them.
In the models/ofa path you mentioned, the directories at the same level include clip and taming, as well as __init__.py, search.py, and sequence_generator.py. What is the connection and difference between them? Can you briefly describe the relationship between the various files in the project, and which ones deserve the most attention?
The sequence_generator.py script is used to decode or generate outputs based on specified hyperparameters, such as model selection and search strategies (e.g., beam search defined in search.py). The taming module, inherited from VQGAN (https://github.com/CompVis/taming-transformers), is used to encode image patches for preprocessing tasks (refer to masked image modeling in the paper) or to generate image patches. The clip module defines image encoders, such as ResNet and ViT, for training.
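To make the sequence_generator.py / search.py relationship concrete, here is a minimal, self-contained sketch of the beam-search idea those files implement (a toy scorer over a two-token vocabulary, not the project's actual code or API):

```python
import math

def beam_search(next_scores, beam_size=2, max_len=3):
    """Toy beam search.

    next_scores(prefix) returns a dict mapping each candidate next token
    to its log-probability. Keeps the beam_size highest-scoring prefixes
    at every step and returns the best sequence of length max_len.
    """
    beams = [((), 0.0)]  # (prefix, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            for tok, logp in next_scores(prefix).items():
                candidates.append((prefix + (tok,), score + logp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_size]  # prune to the beam width
    return beams[0]

# Toy bigram "model": after "a" the likely next token is "b",
# otherwise the likely next token is "a".
def toy_scores(prefix):
    last = prefix[-1] if prefix else None
    if last == "a":
        return {"b": math.log(0.9), "a": math.log(0.1)}
    return {"a": math.log(0.8), "b": math.log(0.2)}

best, score = beam_search(toy_scores, beam_size=2, max_len=3)
# best is ("a", "b", "a"): the greedy-plus-lookahead path the beam keeps.
```

In the real project, the role of `next_scores` is played by the trained model's forward pass, and search.py supplies the pruning strategy that sequence_generator.py drives.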
How do I run this large BiomedGPT model locally? Any recommended links to related tutorials?